Mesham
mediawiki_
http://www.mesham.com/index.php?title=Mesham
MediaWiki 1.32.0
first-letter
Media
Special
Talk
User
User talk
Mesham
Mesham talk
File
File talk
MediaWiki
MediaWiki talk
Template
Template talk
Help
Help talk
Category
Category talk
MediaWiki:Sitenotice
8
2
2
2009-12-31T13:29:48Z
Polas
1
Created page with 'Currently Under Construction, please keep checking back!'
wikitext
text/x-wiki
Currently under construction; please keep checking back!
004d2ad31df95f551da8cbc15d98c84e52b240bc
MediaWiki:Monobook.css
8
3
5
2009-12-31T13:33:26Z
Polas
1
Created page with '/* CSS placed here will affect users of the Monobook skin */ #ca-edit { display: none; }'
css
text/css
/* CSS placed here will affect users of the Monobook skin */
#ca-edit { display: none; }
d1e56f596937430f27e759fe45a4c0e8dabde0f9
Main Page
0
1
9
2009-12-31T13:41:34Z
Polas
1
moved [[Main Page]] to [[Mesham]]
wikitext
text/x-wiki
#REDIRECT [[Mesham]]
c4c6ccf9e5e60445b93d4fbb23b96bdcb40e6bff
MediaWiki:Mainpage
8
4
11
2009-12-31T13:43:39Z
Polas
1
Created page with 'Mesham'
wikitext
text/x-wiki
Mesham
9deaf65c813c450f1cd04c627b6f6178c9d18fcc
Mesham
0
5
13
2009-12-31T13:44:08Z
Polas
1
Created page with 'Welcome'
wikitext
text/x-wiki
Welcome
ca4f9dcf204e2037bfe5884867bead98bd9cbaf8
14
13
2009-12-31T13:45:21Z
Polas
1
wikitext
text/x-wiki
<div id="mainpage"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome to OSDev.org box -->
{{Welcome}}
{{Help Us}}
{{Stylenav}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= OS Development}}
{{Box|subject= Resources}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Second column -->
{{Box|subject= Languages}}
{{Box|subject= Tools}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Hardware}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Fourth column -->
{{Box|subject= OS theory}}
|}
cb8a41416b2c92c33d6b88f2a49739b3a11b43ee
15
14
2009-12-31T13:51:04Z
Polas
1
wikitext
text/x-wiki
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= OS Development}}
{{Box|subject= Resources}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Second column -->
{{Box|subject= Languages}}
{{Box|subject= Tools}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Hardware}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Fourth column -->
{{Box|subject= OS theory}}
|}
d7569db7af95d3b8a70e9f369ccc99670c172d03
16
15
2009-12-31T13:54:46Z
Polas
1
wikitext
text/x-wiki
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= Introduction}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Second column -->
{{Box|subject= Downloads}}
{{Box|subject= Forthcomming}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Documentation}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Fourth column -->
{{Box|subject= Examples}}
|}
df8fad5f55888b56bef9325d55ddfff3fc3fab48
17
16
2009-12-31T14:17:38Z
Polas
1
wikitext
text/x-wiki
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= Introduction}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Second column -->
{{Box|subject= Downloads}}
{{Box|subject= In Development}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Documentation}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Fourth column -->
{{Box|subject= Examples}}
|}
3593ace423c21383da48f15806090ddf329eea6d
18
17
2009-12-31T14:31:11Z
Polas
1
Protected "[[Mesham]]" ([edit=sysop] (indefinite) [move=sysop] (indefinite)) [cascading]
wikitext
text/x-wiki
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= Introduction}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Second column -->
{{Box|subject= Downloads}}
{{Box|subject= In Development}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Documentation}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Fourth column -->
{{Box|subject= Examples}}
|}
3593ace423c21383da48f15806090ddf329eea6d
Template:Box
10
6
27
2009-12-31T13:46:20Z
Polas
1
Created page with '<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 1px solid #CEDFF2; padding:0.6em 0.8em;"> <h2 style="margin:0;backgr…'
wikitext
text/x-wiki
<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 1px solid #CEDFF2; padding:0.6em 0.8em;">
<h2 style="margin:0;background-color:#CEDFF2;font-size:120%;font-weight:bold;border:1px solid #A3B0BF;text-align:left;color:#000;padding:0.2em 0.4em;">{{{subject}}}</h2>
{{{{{subject}}}}}
<div style="text-align: right; margin: 0; padding: 0;"><small>[[:Category:{{{subject}}}|more...]]</small></div>
</div>
6c5cfce0fc3eebf23887165eda8bd927c38ae711
28
27
2009-12-31T14:12:22Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 1px solid #CEDFF2; padding:0.6em 0.8em;">
<h2 style="margin:0;background-color:#CEDFF2;font-size:120%;font-weight:bold;border:1px solid #A3B0BF;text-align:left;color:#000;padding:0.2em 0.4em;">{{{subject}}}</h2>
{{{{{subject}}}}}
</div>
8c5ee63f20fba5793382dc89154fa5a15dd20e63
Template:Help Us
10
7
32
2009-12-31T13:47:29Z
Polas
1
Created page with '<div style="margin: 0 0 15px 0; padding: 0.2em; background-color: #EFEFFF; color: #000000; border: 1px solid #9F9FFF; text-align: center;"> '''The OSDev Wiki always needs your he…'
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 0.2em; background-color: #EFEFFF; color: #000000; border: 1px solid #9F9FFF; text-align: center;">
'''The OSDev Wiki always needs your help! See the [[Wish List]] for more information.'''
</div>
5b60a497bfa0cfe6c8e8dd7afd2d565738c3d2e6
33
32
2009-12-31T14:00:43Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 0.2em; background-color: #EFEFFF; color: #000000; border: 1px solid #9F9FFF; text-align: center;">
'''Mesham always needs your help! See the [[Wish List]] for more information.'''
</div>
1ca8a4deb9e3f1560c4cf93262d4fac6c65350d9
Template:Welcome
10
8
36
2009-12-31T13:49:11Z
Polas
1
Created page with '<div style="margin: 0 0 15px 0; padding: 1px; border: 1px solid #CCCCCC;"> {| style="width: 100%; margin: 0; padding: 0; border: 0; background-color: #FCFCFC; color: #000000; bor…'
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 1px; border: 1px solid #CCCCCC;">
{| style="width: 100%; margin: 0; padding: 0; border: 0; background-color: #FCFCFC; color: #000000; border-collapse: collapse;"
| align="center" style="vertical-align: top; white-space:nowrap;" |
<div class="plainlinks" style="width: 30em; text-align: center; padding: 0.7em 0;">
<div style="font-size: 220%;">Welcome to [http://www.osdev.org/ OSDev.org]</div>
<div style="font-size: 90%; margin-top: 0.7em; line-height: 130%;">This website provides information about the creation of<br>operating systems and serves as a [http://forum.osdev.org/ community] for those<br>people interested in OS creation with [[Special:Statistics|{{NUMBEROFARTICLES}}]] wiki articles.</div>
</div>
|}
</div>
a728e06de8040bda39c9840304231ea0050e085f
37
36
2009-12-31T13:58:44Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 1px; border: 1px solid #CCCCCC;">
{| style="width: 100%; margin: 0; padding: 0; border: 0; background-color: #FCFCFC; color: #000000; border-collapse: collapse;"
| align="center" style="vertical-align: top; white-space:nowrap;" |
<div class="plainlinks" style="width: 30em; text-align: center; padding: 0.7em 0;">
<div style="font-size: 220%;">Welcome to [http://www.mesham.com/ Mesham]</div>
<div style="font-size: 90%; margin-top: 0.7em; line-height: 130%;">This website provides a hub of information regarding the parallel programming language Mesham and type oriented programming.</div>
</div>
|}
</div>
125a6d16121535f0b89d5f4b70fae15496d48734
38
37
2009-12-31T13:59:55Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 1px; border: 1px solid #CCCCCC;">
{| style="width: 100%; margin: 0; padding: 0; border: 0; background-color: #FCFCFC; color: #000000; border-collapse: collapse;"
| align="center" style="vertical-align: top; white-space:nowrap;" |
<div class="plainlinks" style="width: 30em; text-align: center; padding: 0.7em 0;">
<div style="font-size: 220%;">Welcome to [http://www.mesham.com/ Mesham]</div>
<div style="font-size: 90%; margin-top: 0.7em; line-height: 130%;">This website provides a hub of information regarding the<br> parallel programming language Mesham and type oriented programming.</div>
</div>
|}
</div>
bf2920efa31b9865969291b453a3a4d4fbd4da9c
39
38
2009-12-31T14:34:47Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 1px; border: 1px solid #CCCCCC;">
{| style="width: 100%; margin: 0; padding: 0; border: 0; background-color: #FCFCFC; color: #000000; border-collapse: collapse;"
| align="center" style="vertical-align: top; white-space:nowrap;" |
<div class="plainlinks" style="width: 30em; text-align: center; padding: 0.7em 0;">
<div style="font-size: 220%;">Welcome to [http://www.mesham.com/ Mesham]</div>
<div style="font-size: 90%; margin-top: 0.7em; line-height: 130%;">Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet relatively simple to write and maintain.</div>
</div>
|}
</div>
3b842da9a75f3c34c37463692acace46abf0374d
40
39
2009-12-31T14:38:22Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 1px; border: 1px solid #CCCCCC;">
{| style="width: 100%; margin: 0; padding: 0; border: 0; background-color: #FCFCFC; color: #000000; border-collapse: collapse;"
| align="center" style="vertical-align: top; white-space:nowrap;" |
<div class="plainlinks" style="width: 30em; text-align: center; padding: 0.7em 0;">
<div style="font-size: 220%;">Welcome to [http://www.mesham.com/ Mesham]</div>
<div style="font-size: 90%; margin-top: 0.7em; line-height: 130%;">Mesham is a type oriented programming language allowing the writing of high <br>performance parallel codes which are efficient yet simple to write and maintain.</div>
</div>
|}
</div>
1831875e9d0ffec6e245044ea9f980ba8d9a3c5c
Template:Stylenav
10
9
42
2009-12-31T13:49:44Z
Polas
1
Created page with '<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 1px solid #CEDFF2; padding:0.2em 0.2em; text-align: center;"> '''Disp…'
wikitext
text/x-wiki
<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 1px solid #CEDFF2; padding:0.2em 0.2em; text-align: center;">
'''Display: [[Main Page|Short view]] - [[Expanded Main Page|Expanded view]]'''
</div>
3ae3d5e6e4f10637c2693da361aa93b4b26a1bf9
Template:Introduction
10
10
44
2009-12-31T14:04:15Z
Polas
1
Created page with '*[[:Category:What_is_Mesham|What is Mesham?]] *[[:Category:Parallel_Computing|Parallel Computing]] **[[:Category:Communication|Communication]] **[[:Category:Computation|Computati…'
wikitext
text/x-wiki
*[[:Category:What_is_Mesham|What is Mesham?]]
*[[:Category:Parallel_Computing|Parallel Computing]]
**[[:Category:Communication|Communication]]
**[[:Category:Computation|Computation]]
*[[:Category:Type_Oriented_Programming|Type Oriented Programming]]
f91c4f9325f97c09dd4851f00977ca26e99aeda7
45
44
2009-12-31T14:07:56Z
Polas
1
wikitext
text/x-wiki
*[[What_is_Mesham|What is Mesham?]]
*[[Parallel_Computing|Parallel Computing]]
**[[Communication]]
**[[Computation]]
*[[Type_Oriented_Programming|Type Oriented Programming]]
b82065c51e47399b48911e5c96411a41e620292a
46
45
2009-12-31T14:30:13Z
Polas
1
wikitext
text/x-wiki
*[[What_is_Mesham|What is Mesham?]]
*[[Parallel_Computing|Parallel Computing]]
**[[Communication]]
**[[Computation]]
*[[Type Oriented Programming|Type Oriented Programming]]
**[[Type Oriented Programming Concept|The Concept]]
**[[Type Oriented Programming Uses|Uses]]
**[[Type Oriented Programming Why Here|Why Use it Here?]]
140266505abf978a2011aacc258805f0a09ceb5a
Template:Downloads
10
11
52
2009-12-31T14:11:40Z
Polas
1
Created page with '*[[Download_all|All (version 0.41b)]] *Runtime Library **[[Download_rtlsource|Source (version 0.41b)]] **Windows 32 Binary (version 0.41b) *[[Download_server|Server (version 0.41…'
wikitext
text/x-wiki
*[[Download_all|All (version 0.41b)]]
*Runtime Library
**[[Download_rtlsource|Source (version 0.41b)]]
**Windows 32 Binary (version 0.41b)
*[[Download_server|Server (version 0.41b)]]
*[[Download_compiler|Compiler (version 0.41b)]]
ccdd3e277e537d649e20c0c9f36f62ab6dda5c56
Template:Examples
10
12
67
2009-12-31T14:16:02Z
Polas
1
Created page with '*[[Gadget-2]] *[[NPB|NASA's Parallel Benchmarks]] *[[Mandelbrot]] *[[Image_processing|Image Processing With Filters]] *[[Prefix_sums|Prefix Sums]] *[[Dartboard_PI|Dartboard metho…'
wikitext
text/x-wiki
*[[Gadget-2]]
*[[NPB|NASA's Parallel Benchmarks]]
*[[Mandelbrot]]
*[[Image_processing|Image Processing With Filters]]
*[[Prefix_sums|Prefix Sums]]
*[[Dartboard_PI|Dartboard method to find PI]]
*[[Prime_factorization|Prime Factorization]]
573ab68ca6b30673472ee5813d1095c5374865b3
Template:In Development
10
13
75
2009-12-31T14:20:22Z
Polas
1
Created page with '*Mesham 2010 **[[General Additions]] **[[Extentable Types]] **[[Wish List]] *[[New Compiler]]'
wikitext
text/x-wiki
*Mesham 2010
**[[General Additions]]
**[[Extentable Types]]
**[[Wish List]]
*[[New Compiler]]
e994bd66c0a478cdf3b688490f8c282a8d4caccf
Template:Documentation
10
14
80
2009-12-31T14:25:21Z
Polas
1
Created page with '*[[Introduction]] **[[Overview]] **[[The Idea Behind Types]] *[[Core Mesham]] **[[Types]] **[[Sequential]] **[[Parallel]] **[[Procedures]] **[[Preprocessor]] *[[Type Library]] **…'
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[Core Mesham]]
**[[Types]]
**[[Sequential]]
**[[Parallel]]
**[[Procedures]]
**[[Preprocessor]]
*[[Type Library]]
**[[Element]]
**[[Attribute]]
**[[Allocation]]
**[[Collection]]
**[[Communication]]
***[[Primitive_Comm|Primitive]]
***[[Comm_Mode|Mode]]
**[[Partition]]
**[[Distribution]]
**[[Composition]]
*[[Function Library]]
**[[Maths]]
**[[I/O]]
**[[Bits]]
**[[String]]
**[[System]]
035a317f0463aa0333158468fc0f70e555962388
81
80
2009-12-31T14:26:31Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[Core Mesham]]
**[[Types]]
**[[Sequential]]
**[[Parallel]]
**[[Procedures]]
**[[Preprocessor]]
*[[Type Library]]
*[[Function Library]]
202e162fb93dfed17cc4f2d6900a4deaecd45a8b
82
81
2009-12-31T14:27:24Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[Core Mesham]]
**[[Types]]
**[[Sequential]]
**[[Parallel]]
**[[Procedures]]
**[[Preprocessor]]
*[[Type Library]]
**[[Element Types]]
**[[Composite Types]]
*[[Function Library]]
e70ce9d183ffff659e6ebcca1b725edc208f2255
83
82
2009-12-31T15:21:46Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham]]
**[[Types]]
**[[:Category:Sequential]]
**[[:Category:Parallel]]
**[[Procedures]]
**[[:Category:Preprocessor]]
*[[:Category:Type Library]]
**[[:Category:Element Types]]
**[[:Category:Composite Types]]
*[[:Category:Function Library]]
3ebb7d8391fb342cbbbc3b20c4adb205c9850139
84
83
2009-12-31T15:22:42Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham|Core Mesham]]
**[[Types]]
**[[:Category:Sequential|Sequential]]
**[[:Category:Parallel|Parallel]]
**[[Procedures]]
**[[:Category:Preprocessor|Preprocessor]]
*[[:Category:Type Library|Type Library]]
**[[:Category:Element Types|Element Types]]
**[[:Category:Composite Types|Composite Types]]
*[[:Category:Function Library|Function Library]]
0c3b0ff333bd52c778f5348faa979906f3c5faa6
What is Mesham
0
15
91
2009-12-31T14:57:24Z
Polas
1
Created page with '==Introduction== As technical challenges increase, the notion of using many computers to solve tasks is a very attractive one and has been the focus of much research. However, w…'
wikitext
text/x-wiki
==Introduction==
As technical challenges grow, the idea of using many computers together to solve a task is very attractive and has been the focus of much research. However, with the advent of Symmetric MultiProcessors (SMPs), a weakness in this field has been exposed: it is very difficult to write parallel programs of any complexity, and a careless programmer can end up with code that is a nightmare to maintain. Until now, ease of programming and efficiency have been a trade-off, with most parallel codes written in low-level languages.
==Mesham==
'''Mesham''' is a programming language designed to simplify High Performance Computing (HPC) while still producing highly efficient executables. This is achieved mainly via the type system: allowing programmers to provide extra typing information not only lets the compiler perform far more optimisation than is traditionally possible, but also enables conceptually simple programs to be written. Code written in Mesham is relatively simple, efficient, portable and safe.
==Type Oriented Programming==
In ''type oriented programming'' the majority of the complexity of the language is taken away and put into the type system. Whilst abstractions such as functional programming and object orientation have become popular and widespread, use of the type system in this way is completely novel. Placing the complexity of the language into the type system allows for a simple language yet yields high performance due to the rich amount of information readily available to the compiler.
==Why Mesham?==
'''Mesham''' will be of interest to many different people:
*Scientists - With Mesham you can write simple yet highly efficient parallel HPC code which can easily run on a cluster of machines
*HPC Programmers - Mesham can be used in conjunction with Grid computing, with the program being run over a heterogeneous resource
*Normal Computer Users - Programs written in Mesham run seamlessly on SMPs; as a programmer you can take advantage of these multiple processors for common tasks
81a87f73220eaec8559fea433bd7f7dc802eb5a4
92
91
2009-12-31T14:57:45Z
Polas
1
/* Mesham */
wikitext
text/x-wiki
==Introduction==
As technical challenges grow, the idea of using many computers together to solve a task is very attractive and has been the focus of much research. However, with the advent of Symmetric MultiProcessors (SMPs), a weakness in this field has been exposed: it is very difficult to write parallel programs of any complexity, and a careless programmer can end up with code that is a nightmare to maintain. Until now, ease of programming and efficiency have been a trade-off, with most parallel codes written in low-level languages.
==Mesham==
'''Mesham''' is a programming language designed to simplify High Performance Computing (HPC) while still producing highly efficient executables. This is achieved mainly via the type system: allowing programmers to provide extra typing information not only lets the compiler perform far more optimisation than is traditionally possible, but also enables conceptually simple programs to be written. Code written in Mesham is relatively simple, efficient, portable and safe.
==Type Oriented Programming==
In ''type oriented programming'' the majority of the complexity of the language is taken away and put into the type system. Whilst abstractions such as functional programming and object orientation have become popular and widespread, use of the type system in this way is completely novel. Placing the complexity of the language into the type system allows for a simple language yet yields high performance due to the rich amount of information readily available to the compiler.
==Why Mesham?==
'''Mesham''' will be of interest to many different people:
*Scientists - With Mesham you can write simple yet highly efficient parallel HPC code which can easily run on a cluster of machines
*HPC Programmers - Mesham can be used in conjunction with Grid computing, with the program being run over a heterogeneous resource
*Normal Computer Users - Programs written in Mesham run seamlessly on SMPs; as a programmer you can take advantage of these multiple processors for common tasks
65dd42fc90cb37a4ef025373ab6400b085663fd4
Mesham parallel programming language:Copyrights
0
16
98
2009-12-31T15:00:06Z
Polas
1
Created page with 'The intelectual property of the Mesham programming language and associated compiler is copyrighted, no material may be reproduced without the permission of the owner'
wikitext
text/x-wiki
The intellectual property of the Mesham programming language and the associated compiler is copyrighted; no material may be reproduced without the permission of the owner
5f868d11fd28bdcf55b06e8b118bc3f636c51a07
Introduction
0
17
101
2009-12-31T15:06:54Z
Polas
1
Created page with ' ==Why== Mesham was developed as a parallel programming language with a number of concepts in mind. From reviewing existing HPC languages it is obvious that programmers place a …'
wikitext
text/x-wiki
==Why==
Mesham was developed as a parallel programming language with a number of concepts in mind. From reviewing existing HPC languages it is obvious that programmers place a great deal of importance on both performance and resource usage. Due to these constraining factors, HPC code is often very complicated, laced with small efficiency tricks, and becomes difficult to maintain as time goes on. It is often the case that existing HPC code (commonly written in C with a communications library) has reached such a level of complexity that efficiency itself takes a hit.
==Advantages of Abstraction==
By abstracting the programmer from the low level details there are a number of advantages.
*Easier to understand code
*Quicker production time
*Portability easier to achieve
*Changes, such as data structure changes, are easier to make
*The rich parallel structure provides the compiler with lots of optimisation clues
==Important Features==
To produce a language which is usable by current HPC programmers, there are a number of features which we believe are critical to the language's success.
*Simpler to code in
*Efficient Result
*Transparent Translation Process
*Portable
*Safe
*Expressive
==Where We Are==
This documentation, and the language itself, are very much a work in progress. The documentation aims both to illustrate to a potential programmer the benefits of our language and approach, and to act as a reference for those using the language. There is much important development still to be done on the language and tools to build on what has been created thus far.
498cd6ffcea7b9e79ce29b7431c8d953d5da9c33
The Idea Behind Types
0
18
104
2009-12-31T15:12:43Z
Polas
1
Created page with '==A Type== The concept of a type will be familar to many programmers. A large subset of languages follow the syntax [Type] [Variablename], such as "int a" or "float b", to allow…'
wikitext
text/x-wiki
==A Type==
The concept of a type will be familiar to many programmers. A large subset of languages follow the syntax [Type] [Variablename], such as "int a" or "float b", to allow the programmer to declare a variable. Such a statement affects both compile-time and runtime semantics - the compiler can perform analysis and optimisation (such as type checking), and at runtime the variable has a specific size and format. In these languages, the programmer provides information to the compiler via the type. However, there is only so much that one single type can reveal, so languages often include numerous keywords to allow the programmer to specify additional information. Taking C as an example, to declare a variable "m" to be a character in read-only memory the programmer writes "const char m". To extend the language with extra variable attributes (such as where a variable is located in the parallel programming context), new keywords would need to be introduced, which is less than ideal.
==Type Oriented Programming==
The approach adopted by Mesham is to allow the programmer to encode all variable information via the type system, by combining different types together to form a supertype (type chain). In our language, "const char m" becomes "var m: Char :: const[]", where "var m" declares the variable, the operator ":" specifies the type, and the operator "::" combines two types together. In this case, the supertype is formed by combining the type Char with the type const. It should be noted that some type coercions, such as "Int :: Char", are meaningless, and so rules exist within each type to govern which combinations are allowed.
Type precedence runs from right to left - in the example "Char :: const[]", the read-only attributes of const override the default read/write attributes of Char. Abstractly, the programmer can think of the supertype (type chain) as somewhat like a linked list. For instance, the supertype created by "A::B::C::D::E" is illustrated below.
[[File:images/types.jpg|Type Chain Illustration]]
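As a concrete sketch (built only from type names that appear elsewhere in this documentation; the exact spelling of the type library may differ), two such chained declarations might look as follows:

```
var m: Char :: const[];
var z: Char :: allocated[single[on[2]]];
```

In the first declaration const overrides Char's default read/write attribute; in the second, the allocated[single[on[2]]] chain places "z" on process 2 alone.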
==Advantages==
Using this approach many different attributes can be associated with a variable, the fact that types are loosely coupled means that the language designers can add attributes (types) with few problems, and by only changing the type of a variable the semantics can change considerably. Another advantage is that the rich information provided by the programmer allows for many optimisations to be performed during compilation that using a lower level language might not be obvious to the compiler.
==Technically==
On a more technical note, the type system implements a number of services. These are called by the core of the compiler, and if a specific type does not honour a service, the call is passed on to the next type in the chain, until all are exhausted. For instance, given the types "A::B::C::D::E", if service "Q1" were called, type "E" would be asked first; if it did not honour the service, "Q1" would be passed to type "D", then to type "C", and so forth.
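The service dispatch described above is essentially a chain-of-responsibility walk over the type chain. The following is a minimal illustrative sketch in Python, with hypothetical names; the actual compiler internals are not described in this documentation:

```python
# Hypothetical sketch: each type either honours a service or defers to the
# next type in the chain. Dispatch starts at the rightmost (outermost) type.
class TypeNode:
    def __init__(self, name, services=()):
        self.name = name
        self.services = set(services)

def call_service(chain, service):
    """Ask each type from right to left; return the name of the first handler."""
    for node in reversed(chain):
        if service in node.services:
            return node.name
    return None  # no type in the chain honoured the service

# The chain A::B::C::D::E, where only A honours service "Q1":
chain = [TypeNode("A", ["Q1"]), TypeNode("B"), TypeNode("C"),
         TypeNode("D"), TypeNode("E")]
print(call_service(chain, "Q1"))  # E, D, C, B defer; A handles it, so prints "A"
```

The walk terminates as soon as one type honours the service, matching the "until all are exhausted" fallback described above.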
899dbd26cd6955944d74e6b1f5d05575c83d92ea
105
104
2009-12-31T15:18:57Z
Polas
1
wikitext
text/x-wiki
==A Type==
The concept of a type will be familiar to many programmers. A large subset of languages follow the syntax [Type] [Variablename], such as "int a" or "float b", to allow the programmer to declare a variable. Such a statement affects both compile-time and runtime semantics - the compiler can perform analysis and optimisation (such as type checking), and at runtime the variable has a specific size and format. In these languages, the programmer provides information to the compiler via the type. However, there is only so much that one single type can reveal, so languages often include numerous keywords to allow the programmer to specify additional information. Taking C as an example, to declare a variable "m" to be a character in read-only memory the programmer writes "const char m". To extend the language with extra variable attributes (such as where a variable is located in the parallel programming context), new keywords would need to be introduced, which is less than ideal.
==Type Oriented Programming==
The approach adopted by Mesham is to allow the programmer to encode all variable information via the type system, by combining different types together to form a supertype (type chain). In our language, "const char m" becomes "var m: Char :: const[]", where "var m" declares the variable, the operator ":" specifies the type, and the operator "::" combines two types together. In this case, the supertype is formed by combining the type Char with the type const. It should be noted that some type coercions, such as "Int :: Char", are meaningless, and so rules exist within each type to govern which combinations are allowed.
Type precedence runs from right to left - in the example "Char :: const[]", the read-only attributes of const override the default read/write attributes of Char. Abstractly, the programmer can think of the supertype (type chain) as somewhat like a linked list. For instance, the supertype created by "A::B::C::D::E" is illustrated below.
<center>[[File:types.jpg|Type Chain Illustration]]</center>
==Advantages==
Using this approach many different attributes can be associated with a variable, the fact that types are loosely coupled means that the language designers can add attributes (types) with few problems, and by only changing the type of a variable the semantics can change considerably. Another advantage is that the rich information provided by the programmer allows for many optimisations to be performed during compilation that using a lower level language might not be obvious to the compiler.
==Technically==
On a more technical note, the type system implements a number of services. These are called by the core of the compiler, and if a specific type does not honour a service, the call is passed on to the next type in the chain, until all are exhausted. For instance, given the types "A::B::C::D::E", if service "Q1" were called, type "E" would be asked first; if it did not honour the service, "Q1" would be passed to type "D", then to type "C", and so forth.
e8f62b4f36bc354cdf551bb92294e41e7b6d9c20
File:Types.jpg
6
19
108
2009-12-31T15:15:28Z
Polas
1
Type Chain formed when combining types A::B::C::D::E
wikitext
text/x-wiki
Type Chain formed when combining types A::B::C::D::E
f1c13468bdd6fb5b43f265520ee5b5f847894873
Category:Core Mesham
14
20
110
2009-12-31T15:23:18Z
Polas
1
Created page with '[[Category:Core Mesham]]'
wikitext
text/x-wiki
[[Category:Core Mesham]]
515e074b32e208d89a0a23a3f9b2b8b9a110dc94
111
110
2009-12-31T15:24:54Z
Polas
1
wikitext
text/x-wiki
[[Category:Sequential]]
[[Category:Parallel]]
[[Category:Preprocessor]]
[[Procedures]]
fd7183b409672039d6675df041104ad081e93a7c
112
111
2009-12-31T15:25:17Z
Polas
1
Blanked the page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Sequential
14
21
114
2009-12-31T15:26:34Z
Polas
1
Created page with '[[Category:Core Mesham]]'
wikitext
text/x-wiki
[[Category:Core Mesham]]
515e074b32e208d89a0a23a3f9b2b8b9a110dc94
Category:Parallel
14
22
116
2009-12-31T15:27:03Z
Polas
1
Created page with '[[Category:Core Mesham]]'
wikitext
text/x-wiki
[[Category:Core Mesham]]
515e074b32e208d89a0a23a3f9b2b8b9a110dc94
Category:Preprocessor
14
23
118
2009-12-31T15:27:34Z
Polas
1
Created page with '[[Category:Core Mesham]]'
wikitext
text/x-wiki
[[Category:Core Mesham]]
515e074b32e208d89a0a23a3f9b2b8b9a110dc94
Declaration
0
24
120
2009-12-31T15:30:37Z
Polas
1
Created page with '[[Category:Sequential]] ==Syntax== Variable declaration is a key part to any language. In order to declare a variable in Mesham the ''var'' keyword is used. var [varname]; va…'
wikitext
text/x-wiki
[[Category:Sequential]]
==Syntax==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];
var [varname]:=[Value];
var [varname]:[Type];
==Semantics==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
==Examples==
var a;
var b:=99;
a:="hello";
var t:Char;
var z:Char :: allocated[single[on[2]]];
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
9feaa3d6ad993efad39618da35832242f9feac1f
121
120
2009-12-31T15:32:25Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
== Semantics ==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
== Examples ==
var a;
var b:=99;
a:="hello";
var t:Char;
var z:Char :: allocated[single[on[2]]];
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
b2c2062bea9e6eaf630818dbc0cecab1278531ad
122
121
2009-12-31T15:33:47Z
Polas
1
wikitext
text/x-wiki
==Syntax==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
==Semantics==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
==Examples==
var a;
var b:=99;
a:="hello";
var t:Char;
var z:Char :: allocated[single[on[2]]];
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
a6ddad249a2a0c0da1ff4e2c901ca5688e57bc25
123
122
2009-12-31T15:34:12Z
Polas
1
wikitext
text/x-wiki
== Compilers ==
==Syntax==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
==Semantics==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
==Examples==
var a;
var b:=99;
a:="hello";
var t:Char;
var z:Char :: allocated[single[on[2]]];
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
7fc999c348dfa0ac0a6e5126ed99c0c21bf4d758
124
123
2009-12-31T15:34:36Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
==Semantics==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
==Examples==
var a;
var b:=99;
a:="hello";
var t:Char;
var z:Char :: allocated[single[on[2]]];
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
a27fd47db8133a0044d12652b29db0340529b853
125
124
2009-12-31T15:40:27Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the "var" keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
== Semantics ==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
== Examples ==
var a;<br>
var b:=99;<br>
a:="hello";<br>
var t:Char;<br>
var z:Char :: allocated[single[on[2]]];<br>
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
a8dc997f231763db580a72cd6498255121c319d2
Declaration
0
24
126
125
2009-12-31T15:41:50Z
Polas
1
/* Syntax */
wikitext
text/x-wiki
== Syntax ==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
== Semantics ==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
== Examples ==
var a;<br>
var b:=99;<br>
a:="hello";<br>
var t:Char;<br>
var z:Char :: allocated[single[on[2]]];<br>
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
d64b1aac7c5daf956ca2243b1cfb52e24d997e2b
127
126
2009-12-31T15:43:11Z
Polas
1
moved [[Variable Declaration]] to [[Declaration]]
wikitext
text/x-wiki
== Syntax ==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
== Semantics ==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
== Examples ==
var a;<br>
var b:=99;<br>
a:="hello";<br>
var t:Char;<br>
var z:Char :: allocated[single[on[2]]];<br>
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
d64b1aac7c5daf956ca2243b1cfb52e24d997e2b
128
127
2009-12-31T15:57:10Z
Polas
1
/* Examples */
wikitext
text/x-wiki
== Syntax ==
Variable declaration is a key part of any language. In order to declare a variable in Mesham the ''var'' keyword is used.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
== Semantics ==
In the case of a value being specified, the compiler will infer the variable's type from that value, also making the assumption that the variable is allocated on each process.
== Examples ==
var a;
var b:=99;
a:="hello";
var t:Char;
var z:Char :: allocated[single[on[2]]];
In the code example above, the variable "a" is declared; without any further information, its type is inferred from its first use (to hold type String). Variable "b" is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes. "t" is declared to be a character; without further type information it is also assumed to be on all processes (the type Char is automatically coerced with the type allocated[multiple[]]). Lastly, the variable "z" is declared to be of type character, but is allocated only on a single process (process 2).
[[Category:sequential]]
d2862e4197be4b1efa1c5d8911cdf032eae14a3d
Variable Declaration
0
25
134
2009-12-31T15:43:11Z
Polas
1
moved [[Variable Declaration]] to [[Declaration]]
wikitext
text/x-wiki
#REDIRECT [[Declaration]]
3b8c12aa0b78726af77da60c9e428dc5b3648955
Assignment
0
26
136
2009-12-31T15:47:11Z
Polas
1
Created page with '==Syntax== In order to assign a value to a variable then the programmer will need to use variable assignment. [lvalue]:=[rvalue]; Where ''lvalue'' is a variable and ''rvalue''…'
wikitext
text/x-wiki
==Syntax==
In order to assign a value to a variable, the programmer uses variable assignment.
[lvalue]:=[rvalue];
Where ''lvalue'' is a variable and ''rvalue'' is a variable or expression.
== Semantics ==
Will assign the value of ''rvalue'' to the variable ''lvalue''.
== Examples==
var i:=4;<br>
var j:=i;
[[Category:sequential]]
f2c26ea8cc74570f9508de104c8e815809eaab3a
137
136
2009-12-31T15:56:38Z
Polas
1
/* Examples */
wikitext
text/x-wiki
==Syntax==
In order to assign a value to a variable, the programmer uses variable assignment.
[lvalue]:=[rvalue];
Where ''lvalue'' is a variable and ''rvalue'' is a variable or expression.
== Semantics ==
Will assign the value of ''rvalue'' to the variable ''lvalue''.
== Examples==
var i:=4;
var j:=i;
In this example the variable ''i'' will be declared and set to the value 4, and the variable ''j'' will also be declared and set to the value of ''i'' (4). Via type inference, the type of both variables will be ''Int''.
[[Category:sequential]]
e92a8cd1d75bf56fd19288b5e209b1fddea1bbce
For
0
27
142
2009-12-31T15:55:02Z
Polas
1
Created page with '== Syntax == for i from a to b forbody; == Semantics == The for loop can be thought of as syntactic sugar for a while loop, incrementing the variable after each pass and will …'
wikitext
text/x-wiki
== Syntax ==
for i from a to b forbody;
== Semantics ==
The for loop can be thought of as syntactic sugar for a while loop; it will loop from ''a'' to ''b'', incrementing the loop variable after each pass.
== Example ==
var i;
for i from 0 to 9
{
print[i];
};
This code example will loop from 0 to 9 (10 iterations) and display the value of ''i'' on each pass.
d7d241a010aa55092c6973b175aa3bb3a7245119
143
142
2009-12-31T15:55:17Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
for i from a to b forbody;
== Semantics ==
The for loop can be thought of as syntactic sugar for a while loop; it will loop from ''a'' to ''b'', incrementing the loop variable after each pass.
== Example ==
var i;
for i from 0 to 9
{
print[i];
};
This code example will loop from 0 to 9 (10 iterations) and display the value of ''i'' on each pass.
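Given these semantics, the for loop in the example can be thought of as shorthand for the following while-loop form. This is an illustrative sketch, not compiler output; the ''<'' comparison operator is assumed here (only ''>'' appears in the documented while examples):

```
var i;
i:=0;
while (i < 10)
{
   print[i];
   i:=i + 1;
};
```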
[[Category:sequential]]
d15be274c53aa780a82d14c3576963c00bdd8796
MediaWiki:Monobook.css
8
3
6
5
2009-12-31T16:00:07Z
Polas
1
css
text/css
/* CSS placed here will affect users of the Monobook skin */
c80e68799afe5db6daf381b96bd702ff641b8092
7
6
2009-12-31T16:00:56Z
Polas
1
css
text/css
/* CSS placed here will affect users of the Monobook skin */
#ca-edit { display: none; }
d1e56f596937430f27e759fe45a4c0e8dabde0f9
While
0
28
149
2009-12-31T16:03:12Z
Polas
1
Created page with '==Syntax== while (condition) whilebody; ==Semantics== Will loop whilst the condition holds. == Examples == var a:=10; while (a > 0) { a:=a - 1; }; Will loop, each t…'
wikitext
text/x-wiki
==Syntax==
while (condition) whilebody;
==Semantics==
Will loop whilst the condition holds.
== Examples ==
var a:=10;
while (a > 0)
{
a:=a - 1;
};
Will loop, each time decreasing the value of variable ''a'' by 1, until the value reaches 0.
[[Category:Sequential]]
f460d9a151c393d515cff5bb8a862c47591e1bf2
Break
0
29
154
2009-12-31T16:04:41Z
Polas
1
Created page with '== Syntax == break; == Semantics == Will break out of the current enclosing loop body == Example == while (true) break; Only one iteration of the loop will complete, where…'
wikitext
text/x-wiki
== Syntax ==
break;
== Semantics ==
Will break out of the current enclosing loop body.
== Example ==
while (true) break;
Only one iteration of the loop will complete, at which point execution will break out of the body.
[[Category:sequential]]
3776b3c2d33b8983692afbecc8cbe15eb85f107b
Try
0
30
159
2009-12-31T16:11:43Z
Polas
1
Created page with '== Syntax == try<br> {<br> try body<br> } catch (error string) { <br> error handing code<br> }<br> == Semantics == Will execute the code in the try body and handle any errors. …'
wikitext
text/x-wiki
== Syntax ==
try<br>
{<br>
try body<br>
} catch (error string) { <br>
error handling code<br>
}<br>
== Semantics ==
Will execute the code in the try body and handle any errors. This is very important in parallel computing as it allows the programmer to easily deal with any communication errors that may occur.
== Error Strings ==
There are a number of error strings built into Mesham; additional ones can be specified by the programmer.
*Array Bounds - Accessing an array outside its bounds
*Divide by zero - Divide by zero error
*Memory Out - Memory allocation failure
*root - Illegal root process in communication
*rank - Illegal rank in communication
*buffer - Illegal buffer in communication
*count - Wrong count in communication
*type - Communication type error
*comm - Communication communicator error
*truncate - Truncation error in communication
*Group - Illegal group in communication
*op - Illegal operation for communication
*arg - Incorrect arguments used for communication
== Example ==
try
{
print[a#12];
} catch ("Array Bounds") {
print["No Such Index\n"];
};
In this example the programmer is trying to access element 12 of array ''a''. If this does not exist, then instead of that element being displayed an error message is put on the screen.
[[Category:sequential]]
0c5613def6ab75a31e5086f790ac47d1d6b6330d
Throw
0
31
164
2009-12-31T16:13:44Z
Polas
1
Created page with '== Syntax == throw errorstring; == Semantics == Will throw the error string, and either cause termination of the program or, if caught by a try catch block, will be dealt with…'
wikitext
text/x-wiki
== Syntax ==
throw errorstring;
== Semantics ==
Will throw the error string, and either cause termination of the program or, if caught by a try catch block, will be dealt with.
== Example ==
try
{
throw "an error"
} catch "an error" {
print["Error occurred!\n"];
};
In this example, a programmer defined error ''an error'' is thrown and caught.
[[Category:sequential]]
b9594ee90102f07546a8e9bc34d585988d660b5c
If
0
32
170
2009-12-31T16:18:13Z
Polas
1
Created page with '== Syntax == if (condition)<br> {<br> then body<br> } else {<br> else body<br> };<br> == Semantics == Will evaluate the condition and, if true will execute the code in the ''t…'
wikitext
text/x-wiki
== Syntax ==
if (condition)<br>
{<br>
then body<br>
} else {<br>
else body<br>
};<br>
== Semantics ==
Will evaluate the condition and, if true, will execute the code in the ''then body''. Optionally, if the condition is false then the code in the ''else body'' will be executed, if this has been supplied by the programmer.
== Example ==
if (a==b)
{
print["Equal"];
};
In this code example two variables ''a'' and ''b'' are tested for equality. If they are equal then the message will be displayed. As no else section has been specified, no specific behaviour will be adopted if they are unequal.
[[Category:sequential]]
77105c338d8603e21747b5bcc83b10cca8128637
Conditional
0
33
175
2009-12-31T16:19:20Z
Polas
1
Redirected page to [[If]]
wikitext
text/x-wiki
#REDIRECT [[If]]
[[Category:sequential]]
258cc19502efae8a52206b699a9b0541ac6fc6ca
Sequential Composition
0
34
177
2009-12-31T16:25:03Z
Polas
1
Created page with '== Syntax == body ; body == Semantics == Will execute the code before the sequential composition, '';'', and then (if this terminates) will execute the code after the sequenti…'
wikitext
text/x-wiki
== Syntax ==
body ; body
== Semantics ==
Will execute the code before the sequential composition, '';'', and then (if this terminates) will execute the code after the sequential composition.
== Examples ==
var a:=12 ; a:=99
In the above example the variable ''a'' is declared with the value 12; after this the variable is modified to hold the value 99.
function1[] ; function2[]
In the second example ''function1'' will execute, and then (if it terminates) the function ''function2'' will be called.
[[category:sequential]]
9267f50b9fea34069576bf9d1b1a1aab267d0aa7
How To Edit
0
35
182
2009-12-31T17:04:51Z
Polas
1
Created page with '==Before you start== In order to edit the riscos.info wiki, you will need to log in. Before you can log in, you must first create an account. This is a simple process – jus…'
wikitext
text/x-wiki
==Before you start==
In order to edit the riscos.info wiki, you will need to log in. Before you can log in, you must first create an account. This is a simple process – just go to the [[Special:Userlogin|Login page]] and enter the relevant details. Having created an account and logged in, you can then make whatever changes you please throughout most of the wiki. (There are a few sections where only trusted users with greater privileges can make changes.)
Be warned that after a certain amount of inactivity, you will automatically be logged out again. The cutoff is approximately an hour. If you are making anything more than trivial changes, it is better to write them in an external text editor, then cut-and-paste them into place. This reduces the risk that you will lose your work.
== How to edit a Wiki ==
''NB: This is meant as a getting started guide to wiki editing. For a complete list of commands, visit http://en.wikipedia.org/wiki/Help:Editing''
*Every page will have at least one blue '''Edit''' link in a tab at the top of the page (with the exception of certain locked pages).
*Clicking this button when logged in takes you to the editing page
Once at the Editing page, you'll need to know about the format of Wikis.
Generally, everything is designed to be straightforward. riscos.info uses the same [http://meta.wikimedia.org/wiki/MediaWiki MediaWiki] software as [http://www.wikipedia.org/ Wikipedia], so more information can be found reading the [http://meta.wikimedia.org/wiki/Help:Contents MediaWiki Handbook].
=== Formatting ===
Normal text only needs to be typed.
A single new line
doesn't create
a break.
An empty line starts a new paragraph.
*Lines starting with * create lists. Multiple *s nest the list
*Lines starting with # create a numbered list. Using ## and ### will add numbered subsections split with periods.
*Apostrophes can be used to add emphasis. Use the same number of apostrophes to turn the emphasis off again at the end of the section.
**Two apostrophes will put text in italics: <nowiki>''some text''</nowiki> – ''some text''
**Three apostrophes will put text in bold: <nowiki>'''some more text'''</nowiki> – '''some more text'''
**Five apostrophes will put text in bold italics: <nowiki>'''''and some more'''''</nowiki> – '''''and some more'''''
*Sections can be marked by putting the = symbol around the heading. The more = signs are used, the lower-level the heading produced:
**<nowiki>==Main Heading==</nowiki>
**<nowiki>===Sub-heading===</nowiki>
**<nowiki>====Smaller sub-heading====</nowiki>
*Some standard HTML codes can also be used: <nowiki><b></nowiki><b>bold</b><nowiki></b></nowiki> <nowiki><font color="red"></nowiki><font color="red">red</font><nowiki></font></nowiki> Please use these sparingly. However, if you want some text to be in single quotes and italics, <nowiki><i>'quotes and italics'</i></nowiki> produces <i>'quotes and italics'</i> while three quotes would produce <nowiki>'''bold instead'''</nowiki> – '''bold instead'''.
*HTML glyphs – &pound; £, &OElig; Œ, &deg; °, &pi; π etc. may also be used. (The <nowiki><nowiki> and </nowiki></nowiki> tags do not affect these.)
**The ampersand (&) '''must''' be written with the &amp; glyph.
*To override the automatic wiki reformating, surround the text that you do ''not'' want formatted with the <nowiki><nowiki></nowiki> and <nowiki></nowiki></nowiki> tags.
*A line across the page can be produced with four - signs on a blank line:
<nowiki>----</nowiki>
----
*Entries may be signed and dated (recommended for comments on talk pages) with four tildes: <nowiki>~~~~</nowiki> [[User:Simon Smith|Simon Smith]] 02:05, 25 May 2007 (BST)
=== Linking and adding pictures ===
To link to another article within the wiki, eg: [[RISC OS]], type double brackets around the page you want to link to, as follows: <nowiki>[[Page name here]]</nowiki>. If the page you refer to already exists, <nowiki>[[Page name here]]</nowiki> will appear as a blue clickable link. Otherwise, it will appear as a red 'non-existent link', and following it will allow you to create the associated page.
To add a picture, use a link of the form <nowiki>[[Image:image name here|alternative text here]]</nowiki>. For example, <nowiki>[[Image:zap34x41.png|Zap icon]]</nowiki> gives the Zap application icon: [[Image:zap34x41.png|Zap icon]]
There is a summary [[Special:Imagelist|list of uploaded files]] available, and a [[Special:Newimages|gallery of new image files]].
To link to an external URL, type the URL directly, including the leading <nowiki>'http://'</nowiki>, as follows: http://riscos.com. To change how a link to an external URL appears, type ''single'' brackets around the URL, and separate the URL from the alternative text with a space. For example, <nowiki>[http://riscos.com Text to appear]</nowiki> gives [http://riscos.com Text to appear]. As an anti-spamming measure, you will have to enter a CAPTCHA code whenever you add a new link to an external page. The following link gives [[Special:Captcha/help|further information on CAPTCHAs]].
When providing a link, try to make the clickable part self-descriptive. For example, 'The following link gives [[Special:Captcha/help|further information on CAPTCHAs]]' is preferable to 'For further information on CAPTCHAs, click [[Special:Captcha/help|here]]'. A link that says 'click here' is only understandable in context, and users may not be able to tell where the link will send them until they click on it.
If you link to a page that doesn't exist, following the link will send you to a blank page template, allowing you to edit and thus create the new page: [[A page that doesn't exist]].
If you wanted to link to another Wiki article ''X'', but display text ''Y'', use a 'piped link'. Type the name of the page first, then a pipe symbol, then the alternative text. For example, <nowiki>[[RISC OS|Front page]]</nowiki> gives [[RISC OS|Front page]].
=== General Advice ===
The [[RISC OS|front page]] has several categories listed on it. While this list can grow, if your article can fit in one of these categories, then go to the category page in question and add a link to it.
When creating a new page, make use of the ''Preview'' button to avoid filling up the change log with lots of revisions to your new article and always include some information in the 'Summary' box to help others see what's happened in the change log.
If you think a page should exist, but you don't have time to create it, link to it anyway. People are far more likely to fill in blanks if they can just follow a link than if they have to edit links all over the place.
Above all, keep it factual, professional and clean. If you don't, you are liable to be banned from further contribution, and someone will fix your errors anyway! As the disclaimer says: ''''If you don't want your writing to be edited mercilessly and redistributed at will, then don't submit it here.'''' [http://www.wikipedia.org Wikipedia] is proof that the idea works, and works well.
=== Brief Style Guide ===
This subsection gives a brief summary of the style conventions suggested for use throughout the wiki.
* Terms which are particularly important to an entry should have links provided. Terms of only minor relevance should not be linked. It is only necessary to provide a link the first time a related term is used, not every time it appears. Additional links may still be added in longer entries and in any other cases where readers are likely to find it helpful.
* Write out unusual abbreviations in full the first time they are used within each article, and then give the abbreviation within parentheses. (For example: 'Programmer's Reference Manual (PRM)'.) Thereafter, use the abbreviation without further comment. In 'general' articles, the threshold for what is considered an unusual abbreviation will be lower than in 'technical' articles.
* When linking to a compound term include the full term inside the link (rather than part of the term inside the link, part outside) and if necessary use the pipe ('|') symbol to provide more suitable alternative text. For example, use "''[[Martin Wuerthner|Martin Wuerthner's]] applications include …''" rather than "''[[Martin Wuerthner]]'s applications include …''"
* Try to ensure that every link (briefly) describes its contents. Avoid sentences that say, 'To find out about XYZ, [[A page that doesn't exist|click here]]'; instead use sentences of the form, 'Follow this link [[A page that doesn't exist|to find out about XYZ]]'.
* As far as possible use the Wiki codes for bold, italic, lists, etc. rather than inserting HTML markup.
* Use single quotes in preference to double quotes except when quoting a person's actual words.
* Write single-digit numbers in words, numbers of 13 or more as numbers. The numbers 10-12 represent a grey area where either convention may be used as seems appropriate. The best guide is to stay consistent within a particular section of a document. Number ranges and numbers with decimal fractions should always be written as numbers.
* Use HTML glyphs for specialist symbols. Do not forget the trailing semicolon – while most browsers will still display the glyph even if the semicolon is missing, this is not guaranteed to work reliably. Of the sample glyphs given, the ampersand, quotes, and the less than and greater than symbols are the least critical, because the Wiki software will usually automatically alter them to the correct forms. A Google search for [http://www.google.co.uk/search?hl=en&ie=ISO-8859-1&q=HTML+glyphs&btnG=Google+Search&meta= HTML glyphs] gives several useful summaries. Some commonly-used glyphs are given below:
**ampersand : & : &amp;
**dashes : — – : &mdash; &ndash;
**double quotes : " : &quot;
**ellipsis : … : &hellip;
**hard space : : &nbsp;
**less than, greater than : < > : &lt; &gt;
**pound : £ : &pound;
**superscripts : ² ³ : &sup2; &sup3;
* Avoid contractions (it's, doesn't) and exclamations.
* When giving a list of items, provide the entries in ascending alphabetical order unless there is some other more compelling sequence.
* When leaving comments on discussion pages, sign them with four tildes – <nowiki>~~~~</nowiki>. This adds your user name and the time and date.
* In general, the desired tone for the RISC OS wiki is similar to that of a RISC OS magazine. However, highly technical articles should be written to have the same tone and style as the entries in the [[RISC OS Documentation|RISC OS Programmer's Reference Manuals]].
=== Templates ===
Templates allow information to be displayed in the same format on different, related, pages (such as the info box on [http://en.wikipedia.org/wiki/RISC_OS this Wikipedia page]), or to link together related articles (such as the box on [[QEMU|this page]]).
See this [http://home.comcast.net/~gerisch/MediaWikiTemplates.html Getting Started HOWTO], or try editing a [http://en.wikipedia.org/wiki/Wikipedia:Template_messages Wikipedia template] to see the source for an existing example.
The main templates in use within the RISCOS.info Wiki are the [[Template:Application|Application]] and [[Template:Applicationbox|Applicationbox]] templates. Instructions on how to use them are given on their associated talk pages. A couple of Infobox templates have also been set up, but these do not require per-use customisation.
* [[Template_talk:Application|How to use the Application template]]
* [[Template_talk:Applicationbox|How to use the Applicationbox template]]
* [http://www.riscos.info/index.php?title=Special%3AAllpages&from=&namespace=10 List of current templates]
== Talk Pages ==
Every wiki page has a [http://www.mediawiki.org/wiki/Help:Talk_pages Talk page] associated with it. It can be reached through the ''discussion'' tab at the top of the page.
The Talk page is useful for remarks, questions or discussions about the main page. By keeping these on the Talk page, the main page can focus on factual information.
Please observe the following conventions when writing on the Talk page (for a full description see the [http://www.mediawiki.org/wiki/Help:Talk_pages MediaWiki page on Talk pages]):
*Always sign your name after your comments using four tildes '<tt><nowiki>~~~~</nowiki></tt>'. This will expand to your name and a date stamp. Preferably preceed this signature with two dashes and a space: '<tt><nowiki>-- ~~~~</nowiki></tt>'.
*Start a new subject with a <tt><nowiki>== Level 2 Heading ==</nowiki></tt> at the bottom of the page.
*Indent replies with a colon ('<tt>:</tt>') at the beginning of the line. Use multiple colons for deeper indents. Keep your text on one line in the source for this to work. If you really must have more than one paragraph, start that paragraph with a blank line and a new set of colons.
*Unlike in the normal wiki pages, normally you should not edit text written by others.
== Moderating Others' Work ==
If you spot a mistake in someone else's work, correct it, but make a note in the 'Summary' box stating the reason for the change, eg: ''Fixed speeling mistooks''.
If you feel you can add useful information to an existing page, then add it. If you feel something should be removed, remove it, but state why in the 'Summary' box. If it's a point of contention, use the article [[#Talk Pages|talk page]] to start a talk about it.
Before removing or making significant changes to someone else's contribution, consider the [http://meta.wikimedia.org/wiki/Help:Reverting#When_to_revert guidance on "reverting"] from wikimedia.
== Reverting spam ==
Administrators can make use of a [http://en.wikipedia.org/wiki/Wikipedia:Rollback_feature fast rollback facility]. Bring up the article, then click on the History tab. Select the version you wish to rollback to in the first column, and the current version in the second. Click 'compare selected versions'. In the second column will be a 'Rollback' link: click this to rollback. It will also place a comment in the log denoting the rollback.
Reverting when not an administrator is slightly more complicated - see [http://en.wikipedia.org/wiki/Help:Reverting#How_to_revert instructions how to revert].
05e7c63bb6df9bbac32dcd414076ceaf56f62da8
183
182
2009-12-31T17:05:51Z
Polas
1
wikitext
text/x-wiki
==Before you start==
In order to edit the Mesham wiki, you will need to log in. Before you can log in, you must first create an account. This is a simple process – just go to the [[Special:Userlogin|Login page]] and enter the relevant details. Having created an account and logged in, you can then make whatever changes you please throughout most of the wiki. (There are a few sections where only trusted users with greater privileges can make changes.)
Be warned that after a certain amount of inactivity, you will automatically be logged out again. The cutoff is approximately an hour. If you are making anything more than trivial changes, it is better to write them in an external text editor, then cut-and-paste them into place. This reduces the risk that you will lose your work.
== How to edit a Wiki ==
''NB: This is meant as a getting started guide to wiki editing. For a complete list of commands, visit http://en.wikipedia.org/wiki/Help:Editing''
*Every page will have at least one blue '''Edit''' link in a tab at the top of the page (with the exception of certain locked pages).
*Clicking this link while logged in takes you to the editing page.
Once at the Editing page, you'll need to know about the format of Wikis.
Generally, everything is designed to be straightforward. riscos.info uses the same [http://meta.wikimedia.org/wiki/MediaWiki MediaWiki] software as [http://www.wikipedia.org/ Wikipedia], so more information can be found reading the [http://meta.wikimedia.org/wiki/Help:Contents MediaWiki Handbook].
=== Formatting ===
Normal text only needs to be typed.
A single new line
doesn't create
a break.
An empty line starts a new paragraph.
*Lines starting with * create bulleted lists. Multiple *s nest the list.
*Lines starting with # create numbered lists. Using ## and ### creates numbered subsections, with the numbers separated by periods.
*Apostrophes can be used to add emphasis. Use the same number of apostrophes to turn the emphasis off again at the end of the section.
**Two apostrophes will put text in italics: <nowiki>''some text''</nowiki> – ''some text''
**Three apostrophes will put text in bold: <nowiki>'''some more text'''</nowiki> – '''some more text'''
**Five apostrophes will put text in bold italics: <nowiki>'''''and some more'''''</nowiki> – '''''and some more'''''
*Section headings are marked by putting = signs around the heading text. The more = signs used, the lower-level the heading produced:
**<nowiki>==Main Heading==</nowiki>
**<nowiki>===Sub-heading===</nowiki>
**<nowiki>====Smaller sub-heading====</nowiki>
*Some standard HTML codes can also be used: <nowiki><b></nowiki><b>bold</b><nowiki></b></nowiki> <nowiki><font color="red"></nowiki><font color="red">red</font><nowiki></font></nowiki> Please use these sparingly. They are useful, however, where the wiki markup would be ambiguous: if you want some text to be in single quotes and italics, <nowiki><i>'quotes and italics'</i></nowiki> produces <i>'quotes and italics'</i>, whereas the apostrophe markup would run the quotes and the italic markers together into three apostrophes and produce <nowiki>'''bold instead'''</nowiki> – '''bold instead'''.
*HTML glyphs – &pound; £, &OElig; Œ, &deg; °, &pi; π etc. may also be used. (The <nowiki><nowiki> and </nowiki></nowiki> tags do not affect these.)
**The ampersand (&) '''must''' be written with the &amp; glyph.
*To override the automatic wiki reformatting, surround the text that you do ''not'' want formatted with the <nowiki><nowiki></nowiki> and <nowiki></nowiki></nowiki> tags.
*A line across the page can be produced with four - signs on a blank line:
<nowiki>----</nowiki>
----
*Entries may be signed and dated (recommended for comments on talk pages) with four tildes: <nowiki>~~~~</nowiki> [[User:Simon Smith|Simon Smith]] 02:05, 25 May 2007 (BST)
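The emphasis and heading rules above can be sketched as a tiny converter. This is purely an illustration – the function names are invented here, and this is not MediaWiki's real parser, which handles many more cases:

```python
import re

def emphasis_to_html(text):
    """Convert wiki apostrophe emphasis to HTML (illustrative subset).

    Order matters: the five-apostrophe pattern must be tried before
    three, and three before two, so '''''x''''' becomes bold italics
    rather than a mis-nested mixture.
    """
    text = re.sub(r"'''''(.+?)'''''", r"<b><i>\1</i></b>", text)
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)
    text = re.sub(r"''(.+?)''", r"<i>\1</i>", text)
    return text

def heading_to_html(line):
    """Convert ==Heading== style lines to <h2>..</h2> and so on.

    More = signs produce a lower-level (smaller) heading.
    """
    m = re.match(r"^(={2,6})\s*(.*?)\s*\1$", line)
    if m:
        level = len(m.group(1))
        return "<h%d>%s</h%d>" % (level, m.group(2), level)
    return line
```

The only point being made is that the longest apostrophe run must be matched first; real MediaWiki parsing is considerably more involved.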
=== Linking and adding pictures ===
To link to another article within the wiki, e.g. [[RISC OS]], type double square brackets around the name of the page you want to link to, as follows: <nowiki>[[Page name here]]</nowiki>. If the page you refer to already exists, <nowiki>[[Page name here]]</nowiki> will appear as a blue clickable link. Otherwise, it will appear as a red 'non-existent link', and following it will allow you to create the associated page.
To add a picture, use a link of the form <nowiki>[[Image:image name here|alternative text here]]</nowiki>. For example, <nowiki>[[Image:zap34x41.png|Zap icon]]</nowiki> gives the Zap application icon: [[Image:zap34x41.png|Zap icon]]
There is a summary [[Special:Imagelist|list of uploaded files]] available, and a [[Special:Newimages|gallery of new image files]].
To link to an external URL, type the URL directly, including the leading <nowiki>'http://'</nowiki>, as follows: http://riscos.com. To change how a link to an external URL appears, type ''single'' brackets around the URL, and separate the URL from the alternative text with a space. For example, <nowiki>[http://riscos.com Text to appear]</nowiki> gives [http://riscos.com Text to appear]. As an anti-spamming measure, you will have to enter a CAPTCHA code whenever you add a new link to an external page. The following link gives [[Special:Captcha/help|further information on CAPTCHAs]].
When providing a link, try to make the clickable part self-descriptive. For example, 'The following link gives [[Special:Captcha/help|further information on CAPTCHAs]]' is preferable to 'For further information on CAPTCHAs, click [[Special:Captcha/help|here]]'. A link that says 'click here' is only understandable in context, and users may not be able to tell where the link will send them until they click on it.
If you link to a page that doesn't exist, following the link will send you to a blank page template, allowing you to edit and thus create the new page: [[A page that doesn't exist]].
If you wanted to link to another Wiki article ''X'', but display text ''Y'', use a 'piped link'. Type the name of the page first, then a pipe symbol, then the alternative text. For example, <nowiki>[[RISC OS|Front page]]</nowiki> gives [[RISC OS|Front page]].
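The link forms described above follow a simple pattern, sketched below as an illustration (the helper names are invented for this sketch; they are not part of MediaWiki):

```python
def internal_link(page, text=None):
    """Build an internal wiki link: [[Page]] or a piped [[Page|Display text]]."""
    if text:
        return "[[%s|%s]]" % (page, text)
    return "[[%s]]" % page

def external_link(url, text=None):
    """Build an external link: the bare URL, or [url Display text] in single brackets."""
    if text:
        return "[%s %s]" % (url, text)
    return url

def image_link(name, alt):
    """Build an inline image: [[Image:name|alternative text]]."""
    return "[[Image:%s|%s]]" % (name, alt)
```

For example, internal_link("RISC OS", "Front page") yields the piped link <nowiki>[[RISC OS|Front page]]</nowiki> shown above.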
=== General Advice ===
The [[RISC OS|front page]] has several categories listed on it. While this list can grow, if your article can fit in one of these categories, then go to the category page in question and add a link to it.
When creating a new page, make use of the ''Preview'' button to avoid filling up the change log with lots of revisions to your new article, and always include some information in the 'Summary' box to help others see what has happened in the change log.
If you think a page should exist, but you don't have time to create it, link to it anyway. People are far more likely to fill in blanks if they can just follow a link than if they have to edit links all over the place.
Above all, keep it factual, professional and clean. If you don't, you are liable to be banned from further contribution, and someone will fix your errors anyway! As the disclaimer says: ''''If you don't want your writing to be edited mercilessly and redistributed at will, then don't submit it here.'''' [http://www.wikipedia.org Wikipedia] is proof that the idea works, and works well.
=== Brief Style Guide ===
This subsection gives a brief summary of the style conventions suggested for use throughout the wiki.
* Terms which are particularly important to an entry should have links provided. Terms of only minor relevance should not be linked. It is only necessary to provide a link the first time a related term is used, not every time it appears. Additional links may still be added in longer entries and in any other cases where readers are likely to find it helpful.
* Write out unusual abbreviations in full the first time they are used within each article, and then give the abbreviation within parentheses. (For example: 'Programmer's Reference Manual (PRM)'.) Thereafter, use the abbreviation without further comment. In 'general' articles, the threshold for what is considered an unusual abbreviation will be lower than in 'technical' articles.
* When linking to a compound term include the full term inside the link (rather than part of the term inside the link, part outside) and if necessary use the pipe ('|') symbol to provide more suitable alternative text. For example, use "''[[Martin Wuerthner|Martin Wuerthner's]] applications include …''" rather than "''[[Martin Wuerthner]]'s applications include …''"
* Try to ensure that every link (briefly) describes its contents. Avoid sentences that say, 'To find out about XYZ, [[A page that doesn't exist|click here]]'; instead use sentences of the form, 'Follow this link [[A page that doesn't exist|to find out about XYZ]]'.
* As far as possible use the Wiki codes for bold, italic, lists, etc. rather than inserting HTML markup.
* Use single quotes in preference to double quotes except when quoting a person's actual words.
* Write single-digit numbers in words, and numbers of 13 or more as numbers. The numbers 10-12 represent a grey area where either convention may be used as seems appropriate. The best guide is to stay consistent within a particular section of a document. Number ranges and numbers with decimal fractions should always be written as numbers.
* Use HTML glyphs for specialist symbols. Do not forget the trailing semicolon – while most browsers will still display the glyph even if the semicolon is missing, this is not guaranteed to work reliably. Of the sample glyphs given, the ampersand, quotes, and the less than and greater than symbols are the least critical, because the Wiki software will usually automatically alter them to the correct forms. A Google search for [http://www.google.co.uk/search?hl=en&ie=ISO-8859-1&q=HTML+glyphs&btnG=Google+Search&meta= HTML glyphs] gives several useful summaries. Some commonly-used glyphs are given below:
**ampersand : & : &amp;
**dashes : — – : &mdash; &ndash;
**double quotes : " : &quot;
**ellipsis : … : &hellip;
**hard space : : &nbsp;
**less than, greater than : < > : &lt; &gt;
**pound : £ : &pound;
**superscripts : ² ³ : &sup2; &sup3;
* Avoid contractions (it's, doesn't) and exclamations.
* When giving a list of items, provide the entries in ascending alphabetical order unless there is some other more compelling sequence.
* When leaving comments on discussion pages, sign them with four tildes – <nowiki>~~~~</nowiki>. This adds your user name and the time and date.
* In general, the desired tone for the RISC OS wiki is similar to that of a RISC OS magazine. However, highly technical articles should be written to have the same tone and style as the entries in the [[RISC OS Documentation|RISC OS Programmer's Reference Manuals]].
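As a quick sanity check on the glyph list above, Python's standard html module decodes each of the entities to the expected character:

```python
import html

# Each entity from the style guide above, paired with the character
# it should produce (non-ASCII characters given as escapes for clarity).
glyphs = {
    "&amp;": "&",
    "&mdash;": "\u2014",   # em dash
    "&ndash;": "\u2013",   # en dash
    "&quot;": '"',
    "&hellip;": "\u2026",  # ellipsis
    "&nbsp;": "\u00a0",    # hard (non-breaking) space
    "&lt;": "<",
    "&gt;": ">",
    "&pound;": "\u00a3",   # pound sign
    "&sup2;": "\u00b2",    # superscript two
    "&sup3;": "\u00b3",    # superscript three
}

for entity, char in glyphs.items():
    assert html.unescape(entity) == char
```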
=== Templates ===
Templates allow information to be displayed in the same format on different, related, pages (such as the info box on [http://en.wikipedia.org/wiki/RISC_OS this Wikipedia page]), or to link together related articles (such as the box on [[QEMU|this page]]).
See this [http://home.comcast.net/~gerisch/MediaWikiTemplates.html Getting Started HOWTO], or try editing a [http://en.wikipedia.org/wiki/Wikipedia:Template_messages Wikipedia template] to see the source for an existing example.
The main templates in use within the RISCOS.info Wiki are the [[Template:Application|Application]] and [[Template:Applicationbox|Applicationbox]] templates. Instructions on how to use them are given on their associated talk pages. A couple of Infobox templates have also been set up, but these do not require per-use customisation.
* [[Template_talk:Application|How to use the Application template]]
* [[Template_talk:Applicationbox|How to use the Applicationbox template]]
* [http://www.riscos.info/index.php?title=Special%3AAllpages&from=&namespace=10 List of current templates]
== Talk Pages ==
Every wiki page has a [http://www.mediawiki.org/wiki/Help:Talk_pages Talk page] associated with it. It can be reached through the ''discussion'' tab at the top of the page.
The Talk page is useful for remarks, questions or discussions about the main page. By keeping these on the Talk page, the main page can focus on factual information.
Please observe the following conventions when writing on the Talk page (for a full description see the [http://www.mediawiki.org/wiki/Help:Talk_pages MediaWiki page on Talk pages]):
*Always sign your name after your comments using four tildes '<tt><nowiki>~~~~</nowiki></tt>'. This will expand to your name and a date stamp. Preferably precede this signature with two dashes and a space: '<tt><nowiki>-- ~~~~</nowiki></tt>'.
*Start a new subject with a <tt><nowiki>== Level 2 Heading ==</nowiki></tt> at the bottom of the page.
*Indent replies with a colon ('<tt>:</tt>') at the beginning of the line. Use multiple colons for deeper indents. Keep your text on one line in the source for this to work. If you really must have more than one paragraph, start that paragraph with a blank line and a new set of colons.
*Unlike in the normal wiki pages, normally you should not edit text written by others.
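The reply conventions above can be summarised in a small sketch (the function is invented for illustration; the literal <nowiki>~~~~</nowiki> is expanded to a name and date stamp by the wiki software when the page is saved):

```python
def talk_reply(text, depth=1, sign=True):
    """Format a talk-page reply: one leading colon per indent level,
    the text collapsed onto a single source line (colon indents only
    work on one line), and a '-- ~~~~' signature at the end."""
    line = ":" * depth + " ".join(text.split())
    if sign:
        line += " -- ~~~~"
    return line
```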
== Moderating Others' Work ==
If you spot a mistake in someone else's work, correct it, but make a note in the 'Summary' box stating the reason for the change, e.g. ''Fixed speeling mistooks''.
If you feel you can add useful information to an existing page, then add it. If you feel something should be removed, remove it, but state why in the 'Summary' box. If it is a point of contention, use the article's [[#Talk Pages|talk page]] to start a discussion about it.
Before removing or making significant changes to someone else's contribution, consider the [http://meta.wikimedia.org/wiki/Help:Reverting#When_to_revert guidance on "reverting"] from wikimedia.
== Reverting spam ==
Administrators can make use of a [http://en.wikipedia.org/wiki/Wikipedia:Rollback_feature fast rollback facility]. Bring up the article, then click on the History tab. Select the version you wish to roll back to in the first column, and the current version in the second. Click 'compare selected versions'. In the second column will be a 'Rollback' link: click this to roll back. This also places a comment in the log noting the rollback.
Reverting when not an administrator is slightly more complicated – see these [http://en.wikipedia.org/wiki/Help:Reverting#How_to_revert instructions on how to revert].
8fccbc1e46370256f04f3f7e933ccf039053d740
Help:Contents
12
36
185
2009-12-31T17:06:36Z
Polas
1
Created page with 'A few useful links # [[How_To_Edit|How To Edit]]'
wikitext
text/x-wiki
A few useful links:
# [[How_To_Edit|How To Edit]]
f89b5cd5a3eb031ece0ce2bd9d7fdd071d57f4eb
Download 0.41 beta
0
37
187
2009-12-31T19:12:33Z
Polas
1
Created page with '== Version 0.41 == Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although there are some aspects unav…'
wikitext
text/x-wiki
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language provides the majority of the current functionality, although some features are unavailable, which means that the Gadget-2 port is not supported by this version (it requires 0.50).
== Download ==
You can download the Zip file here (1 MB). Full installation instructions for your specific system are included.
== Installation on POSIX Systems ==
#Install the Java Runtime Environment from java.sun.com
#Make sure you have a C compiler installed, e.g. gcc
#Install an implementation of MPI – MPICH (version 2) and OpenMPI are both good choices
#Configure the three components for your machine and their locations – happily, this is all automated by the installlinux script
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham, then issue the command ./installlinux and follow the on-screen prompts. If the command will not run, make it executable with chmod +x installlinux and try again. Once the install script has run, the library, compiler and server should not be moved from their current locations – moving them will cause problems, and if a move is required you must rerun the script and remake them.
#Now type make all
#If you have root access, log in as root and type make install
#Now type make clean (to clean up the directory)
Congratulations! If you have completed these seven steps you have installed the Mesham language on your computer. Now read the readme file for information on how to run the compiler.
NB: If you wish to change the configuration information created by the installer (this is not required, and is for advanced users only), you can – the installer tells you where it has written its config files, and the documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX-based system and follow those instructions. No, seriously: many of the tools and much of the support for parallelism are designed for Unix-based OSes, and as such you will have an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although installation and usage on Windows by an advanced user should still be possible in the future). Having said that, we have had Mesham 0.41 running fine on Windows – it just requires more setup, as far fewer tools are included by default.
=== Requirements ===
#The Java Runtime Environment from java.sun.com
#A C compiler and GNU Make – MinGW, at http://www.mingw.org/, is a very good choice that we suggest
#An implementation of MPI (see the MPI section below for further details)
=== Install ===
Most of the hard work of installing Mesham has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory – we suggest c:\mesham, but it really doesn't matter.
*Now double-click the installwindows.bat file – this runs the installation script; make sure you answer all the questions correctly (if you make an error, just rerun it). The script does a number of things: firstly it configures the compiler with your settings, secondly it configures the server, and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server but not compile the compiler, run installwindows.bat with the option -nocompile.
*Lastly, you will need to install the runtime library. There are a number of options here. The simplest is to use one of our prebuilt libraries: in the libraries directory there are two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit), extract the contents of the appropriate zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32. By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32. If you wish to compile the runtime library rather than use our prebuilt ones, read the readme file in the libraries\windows directory. Note that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine; libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we suggest adding mc.exe (the file just compiled, in compiler\bin) to your MSDOS path. To do this, go to Control Panel, System, the Advanced tab, click Environment Variables, and under System variables scroll down to Path and edit it to add ;c:\mesham\compiler\bin, then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham – if not, change the path accordingly.)
Note – if you ever wish to move the location of the language files, you will need to rerun the installwindows.bat file to reconfigure the setup. Secondly, there is a prebuilt server runner called winrunserver.bat with some default options. If you don't want to build the items yourself, you can run this and then run compiler/wingui.bat for the Mesham-to-C viewer; with no other steps, that will work fine.
=== Using Mesham on Windows ===
'''IMPORTANT''': you MUST make the MPI executable files visible to Mesham. To do this, go to Control Panel, System, the Advanced tab, click Environment Variables, and under System variables scroll down to Path and edit it to add ;c:\program files\mpich2\bin, then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 – if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, you can create Mesham source files and compile them anywhere. This text details how to get up and running; consult the language manual for specific language details.
#First, run the server – it can be found in the server directory; simply double-click runserver.bat. The server will start up (this can take a few moments) and will tell you when it is ready.
#Now create a file – let's call it a.mesh. For the contents, just put in:
var a:=34;
print[a,"\n"];
#Open an MSDOS terminal window, change to the directory where a.mesh is located and type mc a.mesh. The compiler should generate a.exe, which you can run via MSDOS or by double-clicking on it. There are many options available; type mc -h to list them.
If there are any problems, you may need to configure or experiment with your MPI implementation. Certainly with MPICH2 you might need to start the process manager, called smpd.exe, in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the C code, but not compile it, you can use the language's C code viewer by double-clicking windowsgui.bat in compiler\java.
=== MPI for Windows ===
It doesn't matter which implementation you install. Having said that, it seems that the majority of implementations have been created with Unix in mind rather than Windows. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install. This will work automatically via the MPICH installer.
There are other options too; OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. It is hoped to make 0.50 available for download as soon as possible. There are some important differences between the two versions; the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
9d253f625de376a8d24535a249fb344ee6d185f9
188
187
2009-12-31T19:13:26Z
Polas
1
/* Installation on POSIX Systems */
wikitext
text/x-wiki
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although there are some aspects unavailable which means that the Gadget-2 port is not supported by this version (it requires 0.50.)
== Download ==
You can download the Zip file here (1MB.) Full instructions are included on installation for your specific system.
== Installation on POSIX Systems ==
*Install Java RTE from java.sun.com
*Make sure you have a C compiler installed i.e. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good ones, you choose
*The three different components must be configured to your machine and where they are situated, happily this is all automated in the installlinux script.
Open a terminal and cd into your Mesham directory - i.e. cd /home/work/mesham
Then issue the command ./installlinux and follow the on screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from where they are now - this will cause problems and if required you must rerun the script and remake them.
*Now type make all
*If you have root access, login as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these 7 steps you have installed the Mesham language onto your computer! Now read the readme file for information on how to run the compiler
Nb: If you wish to change the configuration information created by the installer (for an advanced user, this is not required) then you can - the installer tells you where it has written its config files and the documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX based system and follow those instructions. No, seriously, many of the tools and support for parallelism really is designed for Unix based OSes and as such you will have an up hill strugle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although for an advanced user installation and usage on Windows still should be possible in the future.) Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup as far fewer tools are included by default.
==== Requirements ====
#Java Run Time Environment from java.sun.com
#A C compiler and GNU maker - MinGW is a very good choice that we suggest, at http://www.mingw.org/
#An implementation of MPI (see the MPI section for further details.)
==== Install ====
To install Mesham really all the hard work has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we would suggest c:\mesham but it really doesnt matter
*Now double click the installwindows.bat file - this will run the installation script, make sure you answer all the questions correctly (if you make an error just rerun it.) The script does a number of things. Firstly it automatically configures the compiler with your settings, secondly it configures the server and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server but not compile the compiler, just run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here. The simplest is to use one of our prebuilt libraries. In the libraries directory there will be two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit) extract the contents of the zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32 . By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32 . If you wish to compile the runtime library rather than use our prebuild ones, then read the readme file in the libraries\windows directory. Note at this stage that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine, but libmesham.a is required for compiling only.
*Thats all the hard work done! For ease of use, we would suggest adding mc.exe (the file just compiled, in compiler\bin) into your MSDOS path. To do this, goto the control panel, system, advanced tab, click on Environment Variables and under System variables scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths, this assumes you have installed the language in c:\mesham, if not change the path accordingly.)
Note - if you ever wish to move the location of the language files, you will need to rerun installwindows.bat file to reconfigure the setup. Secondly, there is a prebuild server runner called winrunserver.bat with some default options. If you dont want to build the items, you can run this, and then run compiler/wingui.bat for the Mesham into C viewer, without any other steps that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''' you MUST make the MPI executable files visible to Mesham. To do this, goto the control panel, system, advanced tab, click on Environment Variables and under System variables scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths, this assumes you have installed MPICH2 and in c:\program files\mpich2, if not change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, then you can create Mesham source files and compile them anywhere. We will detail how to simply get yourself up and running in this text, consult the language manual for specific language details.
#First, run the server - this can be found in the server directory, and simply double click runserver.bat . The server will start up (can take a few moments) and will tell you when its ready
#Now, create a file - lets call it a.mesh. For the contents just put in:
var a:=34;
print[a,"\n"];
#Open a MSDOS terminal window, change the path to the directory where a.mesh is located and type mc a.mesh . The compiler should generate a.exe , and you can run it via MSDOS or by double clicking on it. There are lots of options you can do , type mc -h to find out
If there are any problems, you may need to configure your MPI implementation. With MPICH2 in particular you may need to start the process manager, smpd.exe, in the mpich2\bin directory, and wmpiconfig.exe must be run initially to register a username/password with the process manager.
If you wish only to view the generated C code without compiling it, use the C code viewer by double-clicking windowsgui.bat in compiler\java.
==== MPI for Windows ====
It does not greatly matter which implementation you install, although most implementations have been created with Unix rather than Windows in mind. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for Windows from http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install it; the MPICH installer handles the rest automatically.
There are other options too; OpenMPI may be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. We hope to make 0.50 available for download as soon as possible. There are some important differences between the two versions; improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
270169f128be4a50812b0ab15834bd9b35d285a3
192
191
2009-12-31T19:20:10Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although some aspects are unavailable, which means the Gadget-2 port is not supported by this version (it requires 0.50). Having said that, version 0.41 is currently the only version which explicitly supports Windows. Explicit Windows support will most likely be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all.zip Mesham version 0.41(b) here] as a zip file of approximately 1MB. Full installation instructions for your specific system are included in the package and are also reproduced on this page.
== Installation on POSIX Systems ==
*Install the Java RTE from java.sun.com
*Make sure you have a C compiler installed, e.g. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
*The three components must be configured for your machine and their locations; happily, this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If the command will not run, issue chmod +x installlinux and then try again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems, and if a move is required you must rerun the script and remake them.
*Now type make all
*If you have root access, login as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language on your computer. Now read the readme file for information on how to run the compiler.
N.B. If you wish to change the configuration information created by the installer (this is for advanced users and is not required), you can - the installer reports where it has written its config files, and documentation is included in the respective source folders.
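The permission fix mentioned above can be demonstrated with a stand-in script (the real installlinux is interactive, so a hypothetical one-line script is used here purely to illustrate the chmod step):

```shell
# Stand-in for the real installlinux script (hypothetical, for illustration only)
printf '#!/bin/sh\necho "installer ran"\n' > installlinux

chmod +x installlinux   # grant execute permission, fixing "Permission denied"
./installlinux          # now runs normally, printing: installer ran
```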
== Installation on Windows Systems ==
The best way is to install a POSIX-based system and follow those instructions. No, seriously: many of the tools and much of the support for parallelism are designed for Unix-based OSes, so you will face an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although installation and usage on Windows should still be possible for an advanced user in the future). Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#Java Run Time Environment from java.sun.com
#A C compiler and GNU Make - MinGW, at http://www.mingw.org/, is a very good choice that we suggest
#An implementation of MPI (see the MPI section for further details.)
==== Install ====
Most of the hard work of installing Mesham has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we would suggest c:\mesham but it really doesnt matter
*Now double click the installwindows.bat file - this will run the installation script, make sure you answer all the questions correctly (if you make an error just rerun it.) The script does a number of things. Firstly it automatically configures the compiler with your settings, secondly it configures the server and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server but not compile the compiler, just run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here. The simplest is to use one of our prebuilt libraries. In the libraries directory there will be two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit) extract the contents of the zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32 . By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32 . If you wish to compile the runtime library rather than use our prebuild ones, then read the readme file in the libraries\windows directory. Note at this stage that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine, but libmesham.a is required for compiling only.
*Thats all the hard work done! For ease of use, we would suggest adding mc.exe (the file just compiled, in compiler\bin) into your MSDOS path. To do this, goto the control panel, system, advanced tab, click on Environment Variables and under System variables scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths, this assumes you have installed the language in c:\mesham, if not change the path accordingly.)
Note - if you ever wish to move the location of the language files, you will need to rerun installwindows.bat file to reconfigure the setup. Secondly, there is a prebuild server runner called winrunserver.bat with some default options. If you dont want to build the items, you can run this, and then run compiler/wingui.bat for the Mesham into C viewer, without any other steps that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''' you MUST make the MPI executable files visible to Mesham. To do this, goto the control panel, system, advanced tab, click on Environment Variables and under System variables scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths, this assumes you have installed MPICH2 and in c:\program files\mpich2, if not change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, then you can create Mesham source files and compile them anywhere. We will detail how to simply get yourself up and running in this text, consult the language manual for specific language details.
*First, run the server - this can be found in the server directory, and simply double click runserver.bat . The server will start up (can take a few moments) and will tell you when its ready
*Now, create a file - lets call it a.mesh. For the contents just put in:
var a:=34;
print[a,"\n"];
*Open a MSDOS terminal window, change the path to the directory where a.mesh is located and type mc a.mesh . The compiler should generate a.exe , and you can run it via MSDOS or by double clicking on it. There are lots of options you can do , type mc -h to find out
If there are any problems, you might need to configure/play around with your MPI implementation. Certainly with MPICH2 you might need to start the process manager, called smpd.exe in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the C code, but not compile it, you can use the language C code viewer by double clicking windowsgui.bat in compiler\java
==== MPI for Windows ====
It doesnt matter which implementation you install. Having said that, it seems that the majority of implementations have been created with Unix in mind rather than Windows. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for windows, make sure you have MS Visual Studio, Intel Fortran (free download from their site) and also Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install. This will work automatically via the MPICH installer.
There are other options too, OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. It is hoped to get 0.50 available for download ASAP, there are some important differences between the two versions, some of the improvments in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
4020906662fb91123a0694cdd4bedd6b555a366c
193
192
2009-12-31T19:24:28Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although some features are missing; in particular the Gadget-2 port is not supported by this version (it requires 0.50). Having said that, version 0.41 is the only version which currently explicitly supports Windows. Explicit Windows support will most likely be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b) here]; the zip file is approximately 1MB and supports both POSIX systems and Windows. Full installation instructions for your specific system are included in the download and are also reproduced on this page.
== Installation on POSIX Systems ==
*Install the Java Runtime Environment from java.sun.com
*Make sure you have a C compiler installed, e.g. gcc
*Install an implementation of MPI - MPICH2 and OpenMPI are both good choices
*The three components must be configured for your machine and their locations; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If the command will not run, make it executable with chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server must not be moved from their current locations - doing so will cause problems, and if you do need to move them you must rerun the script and remake them.
*Now type make all
*If you have root access, login as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! Once you have completed these steps you have installed the Mesham language on your computer. Now read the readme file for information on how to run the compiler.
NB: If you wish to change the configuration information created by the installer (this is not required, and is for advanced users only), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX based system and follow those instructions. No, seriously: many of the tools and much of the support for parallelism are really designed for Unix based OSes, and as such you will face an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although for an advanced user, installation and usage on Windows should still be possible in the future). Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#Java Runtime Environment from java.sun.com
#A C compiler and GNU Make - MinGW, at http://www.mingw.org/, is a very good choice that we suggest
#An implementation of MPI (see the MPI section for further details)
==== Install ====
Most of the hard work of installing Mesham has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we suggest c:\mesham, but the location does not really matter
*Now double click the installwindows.bat file - this runs the installation script; make sure you answer all the questions correctly (if you make a mistake, just rerun it). The script does three things: first it automatically configures the compiler with your settings, second it configures the server, and last it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server without compiling the compiler, run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here; the simplest is to use one of our prebuilt libraries. In the libraries directory there are two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit), extract the contents of the appropriate zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32. By the end of this step you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32. If you wish to compile the runtime library yourself rather than use our prebuilt ones, read the readme file in the libraries\windows directory. Note that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine; libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we suggest adding mc.exe (the file just compiled, in compiler\bin) to your MS-DOS path. To do this, go to the Control Panel, System, Advanced tab, click Environment Variables and, under System variables, scroll down to Path and edit it to append ;c:\mesham\compiler\bin, then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham - if not, change the path accordingly.)
Note - if you ever wish to move the language files, you will need to rerun the installwindows.bat file to reconfigure the setup. Secondly, there is a prebuilt server runner called winrunserver.bat with some default options. If you do not want to build anything you can run this, and then run compiler/wingui.bat for the Mesham-to-C viewer; that will work fine without any other steps.
==== Using Mesham on Windows ====
'''IMPORTANT''' you MUST make the MPI executable files visible to Mesham. To do this, go to the Control Panel, System, Advanced tab, click Environment Variables and, under System variables, scroll down to Path and edit it to append ;c:\program files\mpich2\bin, then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 - if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, you can create and compile Mesham source files anywhere. This section covers just getting up and running; consult the language manual for specific language details.
*First, run the server - this can be found in the server directory; simply double click runserver.bat. The server will start up (this can take a few moments) and will tell you when it is ready
*Now, create a file - let's call it a.mesh. For the contents just put in:
var a:=34;
print[a,"\n"];
*Open an MS-DOS terminal window, change to the directory where a.mesh is located and type mc a.mesh. The compiler should generate a.exe, which you can run via MS-DOS or by double clicking on it. There are many other options available; type mc -h to list them
If there are any problems, you may need to configure your MPI implementation. With MPICH2 in particular you may need to start the process manager, smpd.exe in the mpich2\bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the generated C code without compiling it, you can use the C code viewer by double clicking windowsgui.bat in compiler\java
==== MPI for Windows ====
It does not matter which implementation you install, although it seems that the majority of implementations have been created with Unix rather than Windows in mind. MPICH2 certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH2 for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH2 for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install; this will work automatically via the MPICH2 installer.
There are other options too; OpenMPI may be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. We hope to make 0.50 available for download as soon as possible. There are some important differences between the two versions; the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
9e12b4e23e34cb4e09521c28f55bb7615de1c18c
Functions
0
38
202
2010-01-03T23:37:56Z
Polas
1
Created page with '== Syntax == function returntype name[arguments] == Semantics == In a function all arguments are pass by reference (even constants). If the type of argument is a type chain (r…'
wikitext
text/x-wiki
== Syntax ==
function returntype name[arguments]
== Semantics ==
In a function all arguments are passed by reference (even constants). If the type of an argument is a type chain (one requiring ''::'') then it should be declared in the body
== Example ==
function Int add[var a:Int,var b:Int]
{
return a + b;
};
This function takes two integers and will return their sum.
== The main function ==
Returns void and, like C's main, it can have either zero arguments or two. If present, the first argument is the number of command line parameters passed in and the second is a String array containing them. Location 0 of the string array is the program name.
66bbe1a85bc5bde61474cfbed4d38a57aa3e9b88
203
202
2010-01-03T23:38:56Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
function returntype name[arguments]
== Semantics ==
In a function all arguments are passed by reference (even constants). If the type of an argument is a type chain (one requiring ''::'') then it should be declared in the body
== Example ==
function Int add[var a:Int,var b:Int]
{
return a + b;
};
This function takes two integers and will return their sum.
== The main function ==
Returns void and, like C's main, it can have either zero arguments or two. If present, the first argument is the number of command line parameters passed in and the second is a String array containing them. Location 0 of the string array is the program name.
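As a minimal sketch using only the syntax above (the ''void'' return type and the ''print'' statement are assumed to be written as documented elsewhere in this manual), a zero-argument main might look like:
 function void main[]
 {
    print["Hello World!\n"];
 };
The two-argument form additionally receives the command line information described above.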
[[Category:Core Mesham]]
6fe7912314f0d2cab8b1331a180b15b6b0490a05
Par
0
39
210
2010-01-03T23:41:47Z
Polas
1
Created page with '== Syntax == par p from a to b<br> {<br> par body<br> };<br> == Semantics == The parallel equivalent of the for loop, each iteration will execute concurrently on different pro…'
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop: each iteration will execute concurrently on a different process. This allows the programmer to write code MPMD style, with the limitation that the bounds ''a'' and ''b'' must be known at compile time. All (variable sharing) communication in a par loop is performed using one-sided communication, whereas variable sharing SPMD style is performed using synchronous communication for performance reasons.
== Example ==
var p;
par p from 0 to 10
{
print["Hello from process ",p,"\n"];
};
This code fragment will spawn 11 processes (0 to 10 inclusive), each of which will display a message.
[[Category:Parallel]]
f52311e6f0f96ace63a2e25068789220a421a32a
Proc
0
40
221
2010-01-03T23:51:00Z
Polas
1
Created page with '== Syntax == proc n where ''n'' is a variable or value == Semantics == This will limit execution of a block to a certain process == Example == proc 0 { print["Hello f…'
wikitext
text/x-wiki
== Syntax ==
proc n
where ''n'' is a variable or value
== Semantics ==
This will limit execution of a block to a certain process
== Example ==
proc 0
{
print["Hello from 0\n"];
};
proc 1
{
print["hello from 1\n"];
};
This code example will run on two processes; the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
[[Category:Parallel]]
88133affa8d0495387d3bfbfb799828ac78ceaf1
Sync
0
41
231
2010-01-03T23:52:11Z
Polas
1
Created page with '== Syntax == sync name; == Semantics Will synchronise processes where they are needed. For instance, if using the asynchronous communication type, the programmer can synchron…'
wikitext
text/x-wiki
== Syntax ==
sync name;
== Semantics ==
Will synchronise processes where needed. For instance, if using the asynchronous communication type, the programmer can synchronise on a variable name and the keyword will ensure all communications of that variable are up to date. One-sided communication (variable sharing MPMD style in a par loop) is also linked to this keyword, which will ensure all such communication has completed. Used without a variable name it will synchronise all outstanding variables that need synchronising. If a process has no variables that need syncing then it will ignore this keyword and continue.
[[Category:Parallel]]
5bc92cb58eb842f3b771876466df185686c03548
232
231
2010-01-03T23:52:25Z
Polas
1
/* Syntax */
wikitext
text/x-wiki
== Syntax ==
sync name;
== Semantics ==
Will synchronise processes where needed. For instance, if using the asynchronous communication type, the programmer can synchronise on a variable name and the keyword will ensure all communications of that variable are up to date. One-sided communication (variable sharing MPMD style in a par loop) is also linked to this keyword, which will ensure all such communication has completed. Used without a variable name it will synchronise all outstanding variables that need synchronising. If a process has no variables that need syncing then it will ignore this keyword and continue.
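As an illustrative sketch (adapted from the communication example in the type documentation, so the allocation and ''par'' syntax is as documented there), synchronising on a variable name ensures its one-sided communication has completed:
 var a:Int;
 var b:Int :: allocated[single[on[2]]];
 var p;
 par p from 0 to 3
 {
    if (p==2) b:=p;
    a:=b;
    sync a;
 };
Here ''sync a'' ensures the one-sided broadcast of ''b'' into ''a'' is complete on every process before execution continues.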
[[Category:Parallel]]
5e6595616c98c266b06527b94f88597160dce02b
Skip
0
42
238
2010-01-03T23:53:37Z
Polas
1
Created page with '== Syntax == skip == Semantics == Does nothing! [[Category:Sequential]]'
wikitext
text/x-wiki
== Syntax ==
skip
== Semantics ==
Does nothing!
[[Category:Sequential]]
24e1421b8e8f0cabdbd44773082804086e988cd9
Operators
0
43
241
2010-01-03T23:56:48Z
Polas
1
Created page with '== Operators == #+ #- #* Multiplication #% Division #<< Bit shift to left #>> Bit shift to right #== Test for equality #!= Test for inverse equality #= Test of equa…'
wikitext
text/x-wiki
== Operators ==
#+ Addition
#- Subtraction
#* Multiplication
#% Division
#<< Bit shift to the left
#>> Bit shift to the right
#== Test for equality
#!= Test for inequality
#= Test for equality on strings
#< Test lvalue is smaller than rvalue
#> Test lvalue is greater than rvalue
#<= Test lvalue is smaller than or equal to rvalue
#>= Test lvalue is greater than or equal to rvalue
#|| Logical OR
#&& Logical AND
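A small illustrative fragment combining several of the operators above (hypothetical; it assumes the ''var'', ''if'' and ''print'' constructs documented elsewhere in this manual):
 var a:=4*2;
 var b:=a%2;
 if ((a>b) && (a!=b)) print["a is the larger value\n"];
Note that, as listed above, ''%'' denotes division rather than modulo.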
[[Category:Core Mesham]]
1b3ac169b634707296c867aa6117311d6d20ba30
Category:Element Types
14
44
248
2010-01-03T23:59:28Z
Polas
1
Created page with '[[Category:Type Library]]'
wikitext
text/x-wiki
[[Category:Type Library]]
59080a51ca9983880b93aaf73676382c72785431
Int
0
45
250
2010-01-04T00:02:26Z
Polas
1
Created page with '== Syntax == Int == Semantics == A single whole, 32 bit, number == Example == var i:Int; var b:=12; In this example variable ''i'' is explicitly declared to be of type ''…'
wikitext
text/x-wiki
== Syntax ==
Int
== Semantics ==
A single whole (32-bit) number
== Example ==
var i:Int;
var b:=12;
In this example variable ''i'' is explicitly declared to be of type ''Int''. On line 2, variable ''b'' is declared and via type inference will also be of type ''Int''
[[Category:Element Types]]
[[Category:Type Library]]
3eda7ff0d6778b25e2eec02887efd2344789d9ec
251
250
2010-01-04T00:10:26Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Int
== Semantics ==
A single whole (32-bit) number
== Example ==
var i:Int;
var b:=12;
In this example variable ''i'' is explicitly declared to be of type ''Int''. On line 2, variable ''b'' is declared and via type inference will also be of type ''Int''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
5c9b326e42bd66d125e07e9cf28a307db38fd78a
Template:ElementTypeCommunication
10
46
257
2010-01-04T00:09:41Z
Polas
1
Created page with 'When a variable is assigned to another, depending on where each variable is allocated to, there may be communication required to achieve this assignment. Table \ref{tab:eltypecom…'
wikitext
text/x-wiki
When a variable is assigned to another, depending on where each variable is allocated, communication may be required to achieve the assignment. The table below details the communication rules in the assignment ''assigned variable := assigning variable''. If the communication is issued from the MPMD programming style then it will be one-sided. The default communication listed here is guaranteed to be safe, which may result in a small performance hit.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| local assignment on process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
==== Communication Example ====
var a:Int;
var b:Int :: allocated[single[on[2]]];
var p;
par p from 0 to 3
{
if (p==2) b:=p;
a:=b;
};
This code will result in a one-sided broadcast (due to being written MPMD style in a ''par'' loop) where process 2 broadcasts its value of ''b'' to all other processes, which write it into ''a''. As already noted, in the absence of allocation information the default of allocating to all processes is used; in this example the variable ''a'' can be assumed to additionally have the type ''allocated[multiple]''.
bab919d2a0f51874f022d0e1e363309d7428f355
258
257
2010-01-04T00:20:12Z
Polas
1
wikitext
text/x-wiki
When a variable is assigned to another, depending on where each variable is allocated, communication may be required to achieve the assignment. The table below details the communication rules in the assignment ''assigned variable := assigning variable''. If the communication is issued from the MPMD programming style then it will be one-sided. The default communication listed here is guaranteed to be safe, which may result in a small performance hit.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| local assignment on process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
==== Communication Example ====
var a:Int;
var b:Int :: allocated[single[on[2]]];
var p;
par p from 0 to 3
{
if (p==2) b:=p;
a:=b;
};
This code will result in a one-sided broadcast (due to being written MPMD style in a ''par'' loop) where process 2 broadcasts its value of ''b'' to all other processes, which write it into ''a''. As already noted, in the absence of allocation information the default of allocating to all processes is used; in this example the variable ''a'' can be assumed to additionally have the type ''allocated[multiple]''.
b8294772de4a61850aaccc278e3c16fe45740bbf
Float
0
47
263
2010-01-04T00:17:03Z
Polas
1
Created page with '== Syntax == Float == Semantics == A floating point number of size 4 bytes == Example == var i:Float; In this example variable ''i'' is explicitly declared to be of type '…'
wikitext
text/x-wiki
== Syntax ==
Float
== Semantics ==
A floating point number of size 4 bytes
== Example ==
var i:Float;
In this example variable ''i'' is explicitly declared to be of type ''Float''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
bca8e818c4890c1dc77911579c07e4059c66941b
Double
0
48
269
2010-01-04T00:17:54Z
Polas
1
Created page with '== Syntax == Double == Semantics == A double precision floating point number of size 8 bytes == Example == var i:Double; In this example variable ''i'' is explicitly decla…'
wikitext
text/x-wiki
== Syntax ==
Double
== Semantics ==
A double precision floating point number of size 8 bytes
== Example ==
var i:Double;
In this example variable ''i'' is explicitly declared to be of type ''Double''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
ea756d333ed41dcf1d036f47c6c2e3755f9cd436
Bool
0
49
275
2010-01-04T00:19:06Z
Polas
1
Created page with '== Syntax == Bool == Semantics == A true or false value == Example == var i:Bool; var x:=true; In this example variable ''i'' is explicitly declared to be of type ''Bool'…'
wikitext
text/x-wiki
== Syntax ==
Bool
== Semantics ==
A true or false value
== Example ==
var i:Bool;
var x:=true;
In this example variable ''i'' is explicitly declared to be of type ''Bool''. Variable ''x'' is declared with the value ''true'', which via type inference results in its type also becoming ''Bool''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
2188c5513c1eb7a86fe0797abd2c9eb1bd6f889d
Char
0
50
280
2010-01-04T00:21:37Z
Polas
1
Created page with '== Syntax == Char == Semantics == An alphanumeric ASCII character of size 1 byte == Example == var i:Char; var r:='a'; In this example variable ''i'' is explicitly declar…'
wikitext
text/x-wiki
== Syntax ==
Char
== Semantics ==
An ASCII character of size 1 byte
== Example ==
var i:Char;
var r:='a';
In this example variable ''i'' is explicitly declared to be of type ''Char''. Variable ''r'' is declared and found, via type inference, to also be type ''Char''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
550549927d59bce4b4dff938e3981f6f298a6f5c
String
0
51
286
2010-01-04T00:22:50Z
Polas
1
Created page with '== Syntax == String == Semantics == A string of characters == Example == var i:String; var p:="Hello World!"; In this example variable ''i'' is explicitly declared to be …'
wikitext
text/x-wiki
== Syntax ==
String
== Semantics ==
A string of characters
== Example ==
var i:String;
var p:="Hello World!";
In this example variable ''i'' is explicitly declared to be of type ''String''. Variable ''p'' is found, via type inference, also to be of type ''String''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
4ca52653fc0a3c7bd9f8a4ee80224105e1816739
File
0
52
292
2010-01-04T00:23:38Z
Polas
1
Created page with '== Syntax == File == Semantics == A file handle with which the programmer can use to reference open files on the file system == Example == var i:File; In this example vari…'
wikitext
text/x-wiki
== Syntax ==
File
== Semantics ==
A file handle which the programmer can use to reference open files on the file system
== Example ==
var i:File;
In this example variable ''i'' is explicitly declared to be of type ''File''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
5586b88efb896d31a09a553f07c892e6e4c18da4
Long
0
53
298
2010-01-04T00:25:23Z
Polas
1
Created page with '== Syntax == Long == Semantics == A long 64 bit number. == Example == var i:Long; In this example variable ''i'' is explicitly declared to be of type ''Long''. == Communi…'
wikitext
text/x-wiki
== Syntax ==
Long
== Semantics ==
A whole 64-bit number.
== Example ==
var i:Long;
In this example variable ''i'' is explicitly declared to be of type ''Long''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
8442c0c0b4f8b26e5d41dec7a0fdefc3157619c8
Category:Attribute Types
14
54
303
2010-01-04T00:28:18Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
Category:Allocation Types
14
55
306
2010-01-04T00:29:13Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
Category:Collection Types
14
56
309
2010-01-04T00:29:32Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
Category:Primitive Communication Types
14
57
312
2010-01-04T00:30:07Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
313
312
2010-01-10T19:21:32Z
Polas
1
wikitext
text/x-wiki
Primitive communication types ensure that all safe forms of communication supported by MPI can also be represented in Mesham. However, unlike the shared variable approach adopted elsewhere, when using primitive communication the programmer is responsible for ensuring that communications complete and match up.
[[Category:Composite Types]]
06c325e88ccf2273c339ab82e056a1ea274bf4df
Category:Communication Mode Types
14
58
316
2010-01-04T00:30:30Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
317
316
2010-01-10T19:37:25Z
Polas
1
wikitext
text/x-wiki
By default, communication in Mesham is blocking (i.e. it will not continue until a send or receive has completed.) Standard sends complete either when the message has been sent to the target processor or when it has been copied into a buffer on the source machine, ready for sending. In most situations the standard send is the most efficient; however, in some specialist situations more performance can be gained by overriding this.
Providing these communication mode types illustrates a powerful aspect of type based parallelism: the programmer can use the default communication method initially and then, to fine tune their code, simply add extra types to experiment with the performance of the different communication options.
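As a sketch of this workflow (based on the [[Blocking]] and [[Async]] pages; the exact declarations are illustrative), a programmer might first write:
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]];
a:=b;
and later, to experiment with overlapping communication and computation, change only the type of ''b'':
var b:Int::allocated[single[on[2]]] :: async[];
a:=b;
sync;
The program logic is unchanged; only the communication mode differs.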
[[Category:Composite Types]]
4d9c05e688d0af24272c665d99b93ccda5c5cea9
Category:Partition Types
14
59
320
2010-01-04T00:30:56Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
321
320
2010-01-10T21:07:14Z
Polas
1
wikitext
text/x-wiki
Often in data parallel HPC applications the programmer wishes to split up data in some way. This is frequently a difficult task, as the programmer must consider issues such as synchronisation and uneven distributions. Mesham provides types to allow for the partitioning and distribution of data; the programmer need only specify the correct type and, behind the scenes, the compiler will deal with all the complexity via the type system. It has been found that this approach works well, not just because it simplifies the program, but also because some of the (reusable) code associated with parallelisation types is designed beforehand by expert systems programmers. Such types tend to be better optimised by experts than code written directly by the end programmers.
When the programmer partitions data, the compiler splits it up into blocks (an internal type of the compiler.) The location of these blocks depends on the distribution type used: it is possible for all the blocks to be located on one process, on a few or on all, and if there are more blocks than processes they can "wrap around". The whole idea is that the programmer can refer to separate blocks without needing to worry about exactly where they are located, which makes it very easy to change the distribution method to something more efficient later down the line if required.
The programmer can think in terms of two kinds of partitioning: partitioning for distribution and partitioning for viewing. The partition type located inside the allocated type is the partition for distribution (and also the default view of the data.) However, if the programmer wishes to change the way they are viewing the blocks of data, then a different partition type can be coerced. This will modify the view of the data, but NOT the underlying way that the data is allocated and distributed amongst the processes. Of course, it is important to avoid an ambiguous combination of partition types. In order to access a certain block of a partition, simply use the array access operators # or [ ]; for instance ''a#3'' will access the third block of variable ''a''.
In the code ''var a:array[Int,10,20] :: allocated[A[m] :: single[D[]]]'', the variable ''a'' is declared to be a 2D array of size 10 by 20, using partition type ''A'' and splitting the data into ''m'' blocks. These blocks are distributed amongst the processes via distribution method ''D''.
In the code fragment ''a:(a::B[])'', the partition type ''B'' is coerced with the type of variable ''a'', and the view of the data changes from that of ''A'' to ''B''.
[[Category:Composite Types]]
5165090cda15f8cf7da4789990bd69613a177b67
Category:Distribution Types
14
60
324
2010-01-04T00:31:21Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
Category:Composition Types
14
61
327
2010-01-04T00:32:09Z
Polas
1
Created page with '[[Category:Composite Types]]'
wikitext
text/x-wiki
[[Category:Composite Types]]
092f27d13762b5eaf749cb38ac20e7086632357c
Allocated
0
62
330
2010-01-04T00:34:50Z
Polas
1
Created page with '== Syntax == allocated[{type}] == Semantics == This type sets the memory allocation of a variable, which may not be modified once set. == Example == var i: Int :: allocated…'
wikitext
text/x-wiki
== Syntax ==
allocated[{type}]
== Semantics ==
This type sets the memory allocation of a variable, which may not be modified once set.
== Example ==
var i: Int :: allocated[];
In this example the variable ''i'' is an integer. Although the ''allocated'' type is provided, no additional information is given and as such Mesham allocates it to each processor.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
659245a719e08dc5a5c2ac844fa01948edc86588
331
330
2010-01-04T00:35:19Z
Polas
1
/* Syntax */
wikitext
text/x-wiki
== Syntax ==
allocated[type]
Where ''type'' is optional
== Semantics ==
This type sets the memory allocation of a variable, which may not be modified once set.
== Example ==
var i: Int :: allocated[];
In this example the variable ''i'' is an integer. Although the ''allocated'' type is provided, no additional information is given and as such Mesham allocates it to each processor.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
4aeae0b3fc48e8e62ee230e2d2ff794058c5eea7
Multiple
0
63
336
2010-01-04T00:37:07Z
Polas
1
Created page with '== Syntax == multiple[type] Where ''type'' is optional == Semantics == Included in allocated will (with no arguments) set the specific variable to have memory allocated to al…'
wikitext
text/x-wiki
== Syntax ==
multiple[type]
Where ''type'' is optional
== Semantics ==
Included in ''allocated'', this type will (with no arguments) cause the variable to have memory allocated on all processes within the current scope.
== Example ==
var i: Int :: allocated[multiple[]];
In this example the variable ''i'' is an integer, allocated to all processes.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
637ad475b633d41136755bcbddd04f9c7c4670f5
Commgroup
0
64
342
2010-01-04T00:38:13Z
Polas
1
Created page with '== Syntax == commgroup[process list] == Semantics == Specified within the multiple type, will limit memory allocation (and variable communication) to the processes within the …'
wikitext
text/x-wiki
== Syntax ==
commgroup[process list]
== Semantics ==
Specified within the ''multiple'' type, this will limit memory allocation (and variable communication) to the processes within the list given in this type's arguments.
== Example ==
var i:Int :: allocated[multiple[commgroup[1,2]]];
In this example there are a number of processes, but only processes 1 and 2 have variable ''i'' allocated to them.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
8e1310320f25fa0537a66159933d5f534f310c91
Single
0
65
349
2010-01-04T00:39:52Z
Polas
1
Created page with '== Syntax == single[type] single[on[process]] where ''type'' is optional == Semantics == Will allocate a variable to a specific process. Most commonly combined with the ''on''…'
wikitext
text/x-wiki
== Syntax ==
single[type]
single[on[process]]
where ''type'' is optional
== Semantics ==
Will allocate a variable to a specific process. It is most commonly combined with the ''on'' type, which specifies the process to allocate to, although this is not required if the process can be inferred. Additionally, the programmer may place a distribution type within ''single'' when dealing with distributed arrays.
== Example ==
var i:Int :: allocated[single[on[1]]];
In this example variable ''i'' is declared as an integer and allocated on process 1.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
589ff7891ed7a794a15b2bb5dc23e75064d44639
Const
0
66
354
2010-01-04T00:41:35Z
Polas
1
Created page with '== Syntax == const[ ] == Semantics == Enforces the read only property of a variable. == Example == var a:Int; a:=34; a:(a :: const[]); a:=33; The code in the above exam…'
wikitext
text/x-wiki
== Syntax ==
const[ ]
== Semantics ==
Enforces the read-only property of a variable.
== Example ==
var a:Int;
a:=34;
a:(a :: const[]);
a:=33;
The code in the above example will produce an error. Whilst the first assignment (''a:=34'') is legal, on the subsequent line the programmer has modified the type of ''a'' to be that of ''a'' combined with the type ''const''. The second assignment attempts to modify a now read-only variable and will fail.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Attribute Types]]
4101185dcb23f2061be63061cc961b07f1a0a9c2
Tempmem
0
67
359
2010-01-04T00:42:17Z
Polas
1
Created page with '== Syntax == tempmem[ ] == Semantics == Used to inform the compiler that the programmer is happy that a call (usually communication) will use temporary memory. Some calls can …'
wikitext
text/x-wiki
== Syntax ==
tempmem[ ]
== Semantics ==
Used to inform the compiler that the programmer is happy for a call (usually communication) to use temporary memory. Some calls cannot function without this and will give an error; others will work more efficiently with temporary memory but can operate without it at a performance cost. This type is provided because memory is often at a premium, with applications running close to its limit. It is therefore useful for the programmer to be able to indicate whether or not using extra, temporary, memory is allowed.
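== Example ==
The following is an illustrative sketch (the combination shown is an assumption, following the pattern of the other attribute types):
var a:array[Int,100] :: allocated[single[on[1]]] :: tempmem[];
var b:array[Int,100] :: allocated[single[on[2]]];
a:=b;
Here the programmer indicates that the communication arising from the assignment ''a:=b'' is permitted to use extra temporary memory, for instance for buffering, if this is more efficient.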
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Attribute Types]]
b553001cc437f906ff492e662198d31813a0fb55
Share
0
68
363
2010-01-04T00:44:20Z
Polas
1
Created page with '== Syntax == share[name] == Semantics == This type allows the programmer to have two variables sharing the same memory (the variable that the share type is applied to uses the…'
wikitext
text/x-wiki
== Syntax ==
share[name]
== Semantics ==
This type allows the programmer to have two variables sharing the same memory (the variable that the ''share'' type is applied to uses the memory of the variable specified as the argument to the type.) This is very useful in HPC applications, as processes often run at the limit of their resources. The type will share memory with that of the variable ''name'' in the above syntax. In order to keep this type safe, the sharing variable must be smaller than or of equal size to the memory chunk it shares; this is error checked.
== Example ==
var a:Int::allocated[multiple[]];
var c:Int::allocated[multiple[] :: share[a]];
var e:array[Int,10]::allocated[single[on[1]]];
var u:array[Char,12]::allocated[single[on[1]] :: share[e]];
In the example above, the variables ''a'' and ''c'' share the same memory, as do ''e'' and ''u''. At first sight the latter might appear to be an error, as array ''u'' has 12 elements while ''e'' has only 10. When the two arrays have different element types, however, the sizes are checked dynamically: since an Int is 32 bits and a Char only 8, this sharing of data works in this case.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Attribute Types]]
d40fc6dda4f82a89ad803119e22e03e159735832
Extern
0
69
368
2010-01-04T00:45:35Z
Polas
1
Created page with '== Syntax == extern[location] Where ''location'' is optional == Semantics == Provided as additional allocation type information, this tells the compiler NOT to allocate memor…'
wikitext
text/x-wiki
== Syntax ==
extern[location]
Where ''location'' is optional
== Semantics ==
Provided as additional allocation type information, this tells the compiler NOT to allocate memory for the variable, as this has already been done externally. The ''location'' argument is optional and tells the compiler where the variable is to be found (e.g. a C header file) if required.
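== Example ==
An illustrative sketch (the variable and header names are hypothetical, and the placement of ''extern'' within the allocation type is an assumption):
var counter:Int :: allocated[multiple[] :: extern["mylib.h"]];
counter:=counter+1;
Here no memory is allocated by Mesham for ''counter''; the compiler assumes it has already been declared externally in ''mylib.h'' and simply references it.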
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Attribute Types]]
274a2e6b85de490dde0da5680c72508c73f40a20
Directref
0
70
373
2010-01-04T00:47:00Z
Polas
1
Created page with '== Syntax == directref[ ] == Semantics == This tells the compiler that the programmer might use this variable outside of the language (e.g. Via embedded C code) and not to per…'
wikitext
text/x-wiki
== Syntax ==
directref[ ]
== Semantics ==
This tells the compiler that the programmer might use this variable outside of the language (e.g. via embedded C code) and that it should not perform certain optimisations which might prevent this.
== Example ==
var pid:Int :: allocated[multiple[]] :: directref[];
ccode["pid=(int) getpid();","","#include <sys/types.h>","#include <unistd.h>"];
print["My Process ID is ",pid,"\n"];
The code example above illustrates how the Mesham programmer can easily include native C code in their program, using normal program variables. First the variable ''pid'' is declared to be an integer, allocated to all processes and marked for direct reference from native C. The ''ccode'' function then allows the programmer to write C directly; here the POSIX function ''getpid'' obtains the process ID of the current program, which is cast to an integer and stored directly in variable ''pid''. The last line, once again Mesham code, displays this process ID.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Attribute Types]]
a64106d9f367903fbf77a6b213e8492f88e497ad
Array
0
71
379
2010-01-10T19:15:20Z
Polas
1
Created page with '-- Syntax -- array[type,d1$,d2$,...,dn] -- Semantics -- An array, where ''type'' is the element type, followed by the dimensions. The programmer can provide any number of dime…'
wikitext
text/x-wiki
-- Syntax --
array[type,d1$,d2$,...,dn]
-- Semantics --
An array, where ''type'' is the element type, followed by the dimensions. The programmer can provide any number of dimensions to create an n-dimensional array. The default is row-major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer can either use the traditional ''name[index]'' syntax or, alternatively, ''name#index'', which is preferred.
-- Communication --
When an array variable is assigned to another, depending on where each variable is allocated to, there may be communication to achieve this assignment. The table details the communication rules for this assignment ''assigned variable := assigning variable''. As with the element type, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| local assignment on process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
-- Example --
var a:array[String,2] :: allocated[multiple[]];
(a#0):="Hello";
(a#1):="World";
print[(a#0)," ",(a#1),"\n"];
This example declares variable ''a'' to be an array of 2 Strings. The first location in the array is set to ''Hello'' and the second to ''World''. Lastly the code displays both these array string locations on stdout, followed by a newline.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
85ec9c663e0bf46f5fde80a633272aa8bc85d729
380
379
2010-01-10T19:15:49Z
Polas
1
wikitext
text/x-wiki
--- Syntax ---
array[type,d1$,d2$,...,dn]
--- Semantics ---
An array, where ''type'' is the element type, followed by the dimensions. The programmer can provide any number of dimensions to create an n-dimensional array. The default is row-major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer can either use the traditional ''name[index]'' syntax or, alternatively, ''name#index'', which is preferred.
--- Communication ---
When an array variable is assigned to another, depending on where each variable is allocated to, there may be communication to achieve this assignment. The table details the communication rules for this assignment ''assigned variable := assigning variable''. As with the element type, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| local assignment on process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
--- Example ---
var a:array[String,2] :: allocated[multiple[]];
(a#0):="Hello";
(a#1):="World";
print[(a#0)," ",(a#1),"\n"];
This example declares variable ''a'' to be an array of 2 Strings. The first location in the array is set to ''Hello'' and the second to ''World''. Lastly the code displays both these array string locations on stdout, followed by a newline.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
5c4fa50d0e597fc3722c3e3e2a2aded44d94ae5d
381
380
2010-01-10T19:16:11Z
Polas
1
wikitext
text/x-wiki
---- Syntax ----
array[type,d1$,d2$,...,dn]
---- Semantics ----
An array, where ''type'' is the element type, followed by the dimensions. The programmer can provide any number of dimensions to create an n-dimensional array. The default is row-major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer can either use the traditional ''name[index]'' syntax or, alternatively, ''name#index'', which is preferred.
---- Communication ----
When an array variable is assigned to another, depending on where each variable is allocated to, there may be communication to achieve this assignment. The table details the communication rules for this assignment ''assigned variable := assigning variable''. As with the element type, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| local assignment on process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
---- Example ----
var a:array[String,2] :: allocated[multiple[]];
(a#0):="Hello";
(a#1):="World";
print[(a#0)," ",(a#1),"\n"];
This example declares variable ''a'' to be an array of 2 Strings. The first location in the array is set to ''Hello'' and the second to ''World''. Lastly the code displays both these array string locations on stdout, followed by a newline.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
9362a316bd40a159599ec68e4f5a52b1e986a34c
382
381
2010-01-10T19:17:05Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
array[type,d1,d2,...,dn]
== Semantics ==
An array, where ''type'' is the element type, followed by the dimensions. The programmer can provide any number of dimensions to create an n-dimensional array. The default is row-major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer can either use the traditional ''name[index]'' syntax or, alternatively, ''name#index'', which is preferred.
== Communication ==
When an array variable is assigned to another, depending on where each variable is allocated, there may be communication to achieve this assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with the element type, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| local assignment on process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
var a:array[String,2] :: allocated[multiple[]];
(a#0):="Hello";
(a#1):="World";
print[(a#0)," ",(a#1),"\n"];
This example declares variable ''a'' to be an array of 2 Strings. The first location in the array is set to ''Hello'' and the second to ''World''. Lastly the code displays both these array string locations on stdout, followed by a newline.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
0215c03605054d21e58259b276cc90b2d025e087
Row
0
72
391
2010-01-10T19:18:15Z
Polas
1
Created page with ' == Syntax == row[ ] == Semantics == In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provid…'
wikitext
text/x-wiki
== Syntax ==
row[ ]
== Semantics ==
Used in combination with an array, this lets the programmer specify whether allocation is row or column major. This allocation information is provided in the allocation type.
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
((a#1)#2):=23;
(((a :: row[])#1)#2):=23;
Here the array uses column-major allocation, but the programmer has overridden this (just for the assignment) on line 3. If an array with one allocation order is copied to an array with a different order, transposition is performed automatically in order to preserve indexes.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
741dbeafb684c62addb4fb5e6095ba605535ce94
Col
0
73
397
2010-01-10T19:19:09Z
Polas
1
Created page with ' == Syntax == col[ ] == Semantics == In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provid…'
wikitext
text/x-wiki
== Syntax ==
col[ ]
== Semantics ==
Used in combination with an array, this lets the programmer specify whether allocation is row or column major. This allocation information is provided in the allocation type.
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
((a#1)#2):=23;
(((a :: row[])#1)#2):=23;
Here the array uses column-major allocation, but the programmer has overridden this (just for the assignment) on line 3. If an array with one allocation order is copied to an array with a different order, transposition is performed automatically in order to preserve indexes.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
b81af5da9d309a40ae631dcbf63abf6024083358
Channel
0
74
403
2010-01-10T19:23:23Z
Polas
1
Created page with ' == Syntax == channel[a,b] Where ''a'' and ''b'' are both distinct processes which the channel will connect. == Semantics == The ''channel'' type will specify that a variable…'
wikitext
text/x-wiki
== Syntax ==
channel[a,b]
Where ''a'' and ''b'' are both distinct processes which the channel will connect.
== Semantics ==
The ''channel'' type specifies that a variable is a channel from process ''a'' (sender) to process ''b'' (receiver). Normally this will result in synchronous communication, although if the ''async'' type is used then asynchronous communication is selected instead. Note that a channel is unidirectional: process ''a'' sends and ''b'' receives, not the other way around.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2
{
(x::channel[0,2]):=193;
var hello:=(x::channel[0,2]);
};
In this case, ''x'' is a channel between processes 0 and 2. In the par loop process 0 sends the value 193 to process 2. Then the variable ''hello'' is declared and process 2 will receive this value.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
cf80326127e017dba513cd7290fac55d5ccd68bf
Pipe
0
75
409
2010-01-10T19:25:17Z
Polas
1
Created page with '== Syntax == pipe[a,b] == Semantics == Identical to the [[Channel]] type, except pipe is bidirectional rather than unidirectional [[Category:Type Library]] [[Category:Composi…'
wikitext
text/x-wiki
== Syntax ==
pipe[a,b]
== Semantics ==
Identical to the [[Channel]] type, except that a pipe is bidirectional rather than unidirectional.
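== Example ==
A sketch mirroring the [[Channel]] example (the usage shown assumes ''pipe'' is used in the same way as ''channel''):
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2
{
(x::pipe[0,2]):=193;
var hello:=(x::pipe[0,2]);
};
Because a pipe is bidirectional, the roles could equally be reversed, with process 2 sending and process 0 receiving.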
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
225ee2b5782414bf8a5e7fccc110d04973beaa23
Onesided
0
76
413
2010-01-10T19:26:02Z
Polas
1
Created page with '== Syntax == onesided[a,b] == Semantics == Identical to the [[Channel]] type, but will perform onesided communication rather than p2p. This form of communication is less effic…'
wikitext
text/x-wiki
== Syntax ==
onesided[a,b]
== Semantics ==
Identical to the [[Channel]] type, but will perform onesided communication rather than point-to-point (P2P). This form of communication is less efficient than P2P, but there are no issues such as deadlock to consider.
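== Example ==
A sketch mirroring the [[Channel]] example (usage assumed from the stated equivalence to ''channel''):
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2
{
(x::onesided[0,2]):=193;
var hello:=(x::onesided[0,2]);
};
The communication here is one-sided, so there are no deadlock concerns to consider, at some cost in efficiency.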
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
adba2fe064d7770caffa084b2f8ff63bec470a55
Reduce
0
77
420
2010-01-10T19:27:39Z
Polas
1
Created page with '== Syntax == reduce[root,operation] == Semantics == All processes in the group will combine their values together at the root process and then the operation will be performed …'
wikitext
text/x-wiki
== Syntax ==
reduce[root,operation]
== Semantics ==
All processes in the group will combine their values together at the root process and then the operation will be performed on them. Numerous operations are supported, such as sum, min, max, multiply.
== Example ==
var t:Int::allocated[multiple[]];
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3
{
x:(x::reduce[1,"max"]);
x:=p;
t:=x;
};
In this example, ''x'' is to be reduced, with process 1 as the root and the operation being to find the maximum value. In the first assignment, ''x:=p'', all processes combine their values of ''p'' and the maximum is placed into process 1's ''x''. In the second assignment, ''t:=x'', processes combine their values of ''x'' and the maximum is placed into process 1's ''t''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
bc64cc52299e91b458a5583f4e680cc829a87996
Broadcast
0
78
427
2010-01-10T19:29:33Z
Polas
1
Created page with '== Syntax == broadcast[root] == Semantics == This type will broadcast a variable amongst the processes, with the root (source) being PID=root. The variable concerned must eith…'
wikitext
text/x-wiki
== Syntax ==
broadcast[root]
== Semantics ==
This type will broadcast a variable amongst the processes, with the root (source) being the process whose PID equals ''root''. The variable concerned must be allocated either to all processes or to a group of processes (in the latter case communication will be limited to that group).
== Example ==
var a:Int::allocated[multiple[]];
var p;
par p from 0 to 3
{
(a::broadcast[2]):=23;
};
In this example process 2 (the root) will broadcast the value 23 amongst the processes, each process receiving this value and placing it into their copy of ''a''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
a456116021514f6023a584284751abad2d56e402
Gather
0
79
433
2010-01-10T19:30:58Z
Polas
1
Created page with '== Syntax == gather[elements,root] == Semantics == Gather a number of elements (equal to ''elements'') from each process and send these to the root process. == Example == …'
wikitext
text/x-wiki
== Syntax ==
gather[elements,root]
== Semantics ==
Gather a number of elements (equal to ''elements'') from each process and send these to the root process.
== Example ==
var x:array[Int,12] :: allocated[single[on[2]]];
var r:array[Int,3] :: allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::gather[3,2]):=r;
};
In this example, the variable ''x'' is allocated on the root process (2) only, whereas ''r'' is allocated on all processes. In the assignment, all three elements of ''r'' are gathered from each process and sent to the root process (2), where they are placed into variable ''x'' in the order defined by the source's PID.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
0b9ea1577cab22178452539292687f6d5c89dca8
Scatter
0
80
438
2010-01-10T19:32:29Z
Polas
1
Created page with '== Syntax == scatter[elements,root] == Semantics == Will send a number of elements (equal to ''elements'') from the root process to all other processes. == Example == var …'
wikitext
text/x-wiki
== Syntax ==
scatter[elements,root]
== Semantics ==
Will send a number of elements (equal to ''elements'') from the root process to all other processes.
== Example ==
var x:array[Int,3]::allocated[multiple[]];
var r:array[Int,12]::allocated[multiple[]];
var p;
par p from 0 to 3
{
x:(x::scatter[3,1]);
x:=r;
};
In this example, three elements of array ''r'', on process 1, are scattered to each process and placed in their copy of ''x''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
8dd83f1bad23e705e85d4a9cd922ae8b91b47f8e
Alltoall
0
81
443
2010-01-10T19:35:20Z
Polas
1
Created page with '== Syntax == alltoall[elementsoneach] == Semantics == Will cause each process to send some elements (the number being equal to ''elementsoneach'') to every other process in th…'
wikitext
text/x-wiki
== Syntax ==
alltoall[elementsoneach]
== Semantics ==
Will cause each process to send some elements (the number being equal to ''elementsoneach'') to every other process in the group.
== Example ==
var x:array[Int,12]::allocated[multiple[]];
var r:array[Int,3]::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::alltoall[3]):=r;
};
In this example each process sends every other process three elements (the elements in its ''r''.) Therefore each process ends up with twelve elements in ''x'', the location of each based on the source process's PID.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
8407a9d4c7e13f3924a7bcd804147743f5c6ed76
Allreduce
0
82
448
2010-01-10T19:36:44Z
Polas
1
Created page with '== Syntax == allreduce[operation] == Semantics == Similar to the [[reduce]] type, but the reduction will be performed on each process and the result is also available to all. …'
wikitext
text/x-wiki
== Syntax ==
allreduce[operation]
== Semantics ==
Similar to the [[reduce]] type, but the reduction will be performed on each process and the result is also available to all.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::allreduce["min"]):=p;
};
In this case all processes will perform the reduction on ''p'' and all processes will have the minimum value of ''p'' placed into their copy of ''x''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
174da5027ceea7d7d3e37bf876ff0897ada69c4e
Async
0
83
455
2010-01-10T19:40:11Z
Polas
1
Created page with '== Syntax == async[ ] == Semantics == This type will specify that the communication to be carried out should be done so asynchronously. Asynchronous communication is often ver…'
wikitext
text/x-wiki
== Syntax ==
async[ ]
== Semantics ==
This type specifies that the communication concerned should be carried out asynchronously. Asynchronous communication is often very useful and, if used correctly, can increase the efficiency of some applications (although care must be taken.) There are a number of different ways in which the results of asynchronous communication can be accepted: when the asynchronous operation is honoured the data is placed into the variable, but exactly when the operation will be honoured is non-deterministic, so care must be taken if using dirty values.
The [[sync]] keyword allows the programmer to synchronise either ALL asynchronous communication or that of a specific variable. The programmer must ensure that all asynchronous communications have been honoured before the process exits, otherwise the behaviour is undefined.
== Examples ==
var a:Int::allocated[multiple[]] :: channel[0,1] :: async[];
var p;
par p from 0 to 2
{
a:=89;
var q:=20;
q:=a;
sync q;
};
In this example, ''a'' is declared to be an integer, allocated to all processes, and to act as an asynchronous channel between processes 0 and 1. In the par loop, the assignment ''a:=89'' is applicable on process 0 only, resulting in an asynchronous send. Each process executes the assignment and declaration ''var q:=20'' but only process 1 will execute the last assignment ''q:=a'', resulting in an asynchronous receive. Each process then synchronises all the communications relating to variable ''q''.
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: async[];
var c:Int::allocated[single[on[3]]] :: async[];
a:=b;
c:=a;
b:=c;
sync;
This example demonstrates the use of the ''async'' type with default shared variable style communication. In the assignment ''a:=b'', process 2 will issue an asynchronous send and process 1 a synchronous (standard) receive. In the second assignment, ''c:=a'', process 3 will issue an asynchronous receive and process 1 a synchronous send. In the last assignment, ''b:=c'', both processes (3 and 2) will issue asynchronous communication calls (a send and a receive respectively). The last line of the program forces each process to wait for and complete all asynchronous communications.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
b396d8d746eea6f889b0faa5c65cd994ebde771f
Blocking
0
84
461
2010-01-10T19:41:52Z
Polas
1
Created page with '== Syntax == blocking[ ] == Semantics == Will force P2P communication to be blocking, which is the default setting == Example == var a:Int::allocated[single[on[1]]]; var b…'
wikitext
text/x-wiki
== Syntax ==
blocking[ ]
== Semantics ==
Forces P2P (point-to-point) communication to be blocking, which is the default setting.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: blocking[];
a:=b;
The P2P communication (send on process 2 and receive on process 1) resulting from the assignment ''a:=b'' will force program flow to wait until it has completed. The ''blocking'' type has been omitted from the type of variable ''a'', but it applies by default.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
c5472320c7e5ff17dae6f4758dead23c0363a4e3
Nonblocking
0
85
467
2010-01-10T19:43:04Z
Polas
1
Created page with '== Syntax == nonblocking[ ] == Semantics == This type will force P2P communication to be nonblocking. In this mode communication (send or receive) can be thought of as having …'
wikitext
text/x-wiki
== Syntax ==
nonblocking[ ]
== Semantics ==
This type will force P2P communication to be nonblocking. In this mode a communication (send or receive) can be thought of as having two distinct states - start and finish. The nonblocking type will start communication and allow program execution to continue between these two states, whilst blocking (standard) mode requires that the finish state be reached before continuing. The [[sync]] keyword can be used to force the program to wait until the finish state has been reached.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[];
var b:Int::allocated[single[on[2]]];
a:=b;
sync a;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking receive whilst process 2 will issue a blocking send. All nonblocking communication with respect to variable ''a'' is completed by the keyword ''sync a''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
bd2db68d88d8f5d525108defe8407ad3bbbf2f03
Standard
0
86
473
2010-01-10T19:44:01Z
Polas
1
Created page with '== Syntax == standard[ ] == Semantics == This type will force P2P sends to follow the standard form of reaching the finish state either when the message has been delivered or …'
wikitext
text/x-wiki
== Syntax ==
standard[ ]
== Semantics ==
This type will force P2P sends to follow the standard form of reaching the finish state either when the message has been delivered or it has been copied into a buffer on the sender. This is the default applied if further type information is not present.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[] :: standard[];
var b:Int::allocated[single[on[2]]] :: standard[];
a:=b;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking standard receive whilst process 2 will issue a blocking standard send.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
ee876b0d1384d66229db0dcc8186b91acd6316f5
Buffered
0
87
479
2010-01-10T19:45:25Z
Polas
1
Created page with '== Syntax == buffered[buffersize] == Semantics == This type will ensure that P2P Send will reach the finish state (i.e. complete) when the message is copied into a buffer of s…'
wikitext
text/x-wiki
== Syntax ==
buffered[buffersize]
== Semantics ==
This type will ensure that P2P Send will reach the finish state (i.e. complete) when the message is copied into a buffer of size ''buffersize'' bytes. At some later point the message will be sent to the target process. If ''buffersize'' is not provided then a default is used.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: buffered[500];
var c:Int::allocated[single[on[2]]] :: buffered[500] :: nonblocking[];
a:=b;
a:=c;
In the P2P communication resulting from the assignment ''a:=b'', process 2 will issue a (blocking) buffered send (buffer size 500 bytes), which will complete once the message has been copied into this buffer. In the assignment ''a:=c'', process 2 will issue another buffered send, this time nonblocking, so program flow continues between the start and finish states of the communication. The finish state will be reached once the value of variable ''c'' has been copied into a buffer held on process 2.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
0cfee5b9c0cd9e2a72096075e1ab50bdc6203a88
Ready
0
88
486
2010-01-10T19:46:44Z
Polas
1
Created page with ' == Syntax == ready[ ] == Semantics == The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunc…'
wikitext
text/x-wiki
== Syntax ==
ready[ ]
== Semantics ==
The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunction with the [[nonblocking]] type, communication start will wait until a matching receive is posted. This type acts as a form of handshaking and can improve performance in some uses.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: ready[];
var c:Int::allocated[single[on[2]]] :: ready[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' will only begin once the receive from process 1 has been issued. With the statement ''a:=c'' the send, even though it is [[nonblocking]], will only start once a matching receive has been issued too.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
d51c6d2c09db2e7ea39973cddc52dff152fe0b75
487
486
2010-01-10T19:49:24Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
ready[ ]
== Semantics ==
The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunction with the [[nonblocking]] type, communication start will wait until a matching receive is posted. This type acts as a form of handshaking and can improve performance in some uses.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: ready[];
var c:Int::allocated[single[on[2]]] :: ready[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' will only begin once the receive from process 1 has been issued. With the statement ''a:=c'' the send, even though it is [[nonblocking]], will only start once a matching receive has been issued too.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
cc491e1b61a01512e0fd291d81d05f7537e384c1
Synchronous
0
89
493
2010-01-10T19:49:04Z
Polas
1
Created page with '== Syntax == synchronous[] == Semantics == By using this type, the send of P2P communication will only reach the finish state once the message has been received by the target …'
wikitext
text/x-wiki
== Syntax ==
synchronous[]
== Semantics ==
By using this type, the send of P2P communication will only reach the finish state once the message has been received by the target processor.
== Examples ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: synchronous[] :: blocking[];
var c:Int::allocated[single[on[2]]] :: synchronous[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' (and program execution on process 2) will only complete once process 1 has received the value of ''b''. The send involved in the second assignment is synchronous and [[nonblocking]]: program execution can continue between the start and finish states, with the finish state only reached once process 1 has received the message (the value of ''c''). Incidentally, as already mentioned, the [[blocking]] type of variable ''b'' would have been applied by default if omitted (as in previous examples).
var a:Int :: allocated[single[on[0]]];
var b:Int :: allocated[single[on[1]]];
a:=b;
a:=(b :: synchronous[]);
The code example above demonstrates the programmer's ability to change the communication send mode for a specific assignment. In the first assignment, process 1 issues a [[blocking]] [[standard]] send; however, in the second assignment the communication mode type ''synchronous'' is coerced with the type of ''b'' to provide a [[blocking]] synchronous send for this assignment only.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
ecf791fa6a4d2718604a3b91924019ec88b78fd8
Horizontal
0
90
499
2010-01-10T21:11:56Z
Polas
1
Created page with '== Syntax == horizontal[ blocks ] Where ''blocks'' is number of blocks to partition into. == Semantics == This type will split up data horizontally into a number of blocks. I…'
wikitext
text/x-wiki
== Syntax ==
horizontal[ blocks ]
Where ''blocks'' is the number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size.
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As the last row of the table shows, if the two partitions are of the same type then a simple copy is performed. However, if they differ then an error is generated, as Mesham disallows differently typed partitions from being assigned to each other.
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of the block.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Partition Types]]
6ac9d085c1d46dd31bd55fbf29d88b20be865716
500
499
2010-01-10T21:12:15Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size.
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As the last row of the table shows, if the two partitions are of the same type then a simple copy is performed. However, if they differ then an error is generated, as Mesham disallows differently typed partitions from being assigned to each other.
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of the block.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Partition Types]]
4c9c4d7c4815a7adedcc1d0064105a93e120b6c0
501
500
2010-01-10T21:16:50Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
[[Image:horiz.jpg]]
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As the last row of the table shows, if the two partitions are of the same type then a simple copy is performed. However, if they differ then an error is generated, as Mesham disallows differently typed partitions from being assigned to each other.
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of the block.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Partition Types]]
27b65b3c2caab947ebe71c0266f3e27f7e75bd48
502
501
2010-01-10T21:17:33Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg]]</center>
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As the last row of the table shows, if the two partitions are of the same type then a simple copy is performed. However, if they differ then an error is generated, as Mesham disallows differently typed partitions from being assigned to each other.
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of the block.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Partition Types]]
5d6c096c3913a4d4ae5945d68a7a49597338c2f9
503
502
2010-01-10T21:18:43Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As the last row of the table shows, if the two partitions are of the same type then a simple copy is performed. However, if they differ then an error is generated, as Mesham disallows differently typed partitions from being assigned to each other.
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of the block.
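As a brief sketch of these bounds accessors (hedged: the block selection via ''#'' follows the style of the [[evendist]] page, and applying ''.high'' and ''.low'' to a selected block is an illustrative assumption):
 var a:array[Int,16,16] :: allocated[row[] :: horizontal[4] :: single[evendist[]]];
 var p;
 par p from 0 to 3
 {
    var top:=(a#p).high;
    var bottom:=(a#p).low;
 };
Each process reads the top and bottom bounds of its own block.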
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Partition Types]]
2de1dc7458ccca1ace42f499064a3d868f8b41b3
Vertical
0
91
511
2010-01-10T21:13:21Z
Polas
1
Created page with '== Syntax == vertical[blocks] == Semantics == Same as the [[horizontal]] type but will partition the array vertically [[Category:Type Library]] [[Category:Composite Types]] …'
wikitext
text/x-wiki
== Syntax ==
vertical[blocks]
== Semantics ==
Same as the [[horizontal]] type, but partitions the array vertically.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Partition Types]]
b7b4b015947f25f1028eacbb755c0cd5d935f93c
512
511
2010-01-10T21:19:45Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
vertical[blocks]
== Semantics ==
Same as the [[horizontal]] type, but partitions the array vertically. The figure below illustrates partitioning an array into four blocks vertically.
<center>[[Image:vert.jpg|Vertical Partition of an array into four blocks via type oriented programming]]</center>
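A minimal declaration sketch in the style of the [[evendist]] example (the allocation and distribution types shown are illustrative choices, not required by ''vertical''):
 var b:array[Int,16,16] :: allocated[row[] :: vertical[4] :: single[evendist[]]];
This partitions the 16x16 array ''b'' into four vertical blocks, here distributed one per process in a four process run.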
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Partition Types]]
cba04ad4dfbcd1e8e7488b7352127dec4abbe6d3
File:Horiz.jpg
6
92
517
2010-01-10T21:14:54Z
Polas
1
Horizontal partitioning of an array via the horizontal type
wikitext
text/x-wiki
Horizontal partitioning of an array via the horizontal type
574c772bfc90f590db956c081c201e3ab506c94b
File:Vert.jpg
6
93
519
2010-01-10T21:15:22Z
Polas
1
Vertical partitioning of an array via the vertical type
wikitext
text/x-wiki
Vertical partitioning of an array via the vertical type
bf828b129f970f21341fb2357d36f32a993c68be
File:Evendist.jpg
6
94
521
2010-01-10T21:25:48Z
Polas
1
Even distribution of 10 blocks over 4 processors
wikitext
text/x-wiki
Even distribution of 10 blocks over 4 processors
1831c950976897aab248fe6058609023f0edb3bd
Evendist
0
95
523
2010-01-10T21:26:29Z
Polas
1
Created page with '== Syntax == evendist[] == Semantics == Will distribute data blocks evenly amongst the processes. If there are too few processes then the blocks will wrap around, if there are…'
wikitext
text/x-wiki
== Syntax ==
evendist[]
== Semantics ==
Will distribute data blocks evenly amongst the processes. If there are too few processes then the blocks will wrap around; if there are too few blocks then not all processes will receive a block. The figure below illustrates the even distribution of 10 blocks of data over 4 processes.
<center>[[Image:evendist.jpg|Even distribution of 10 blocks of data over 4 processors using type oriented programming]]</center>
== Example ==
var a:array[Int,16,16] :: allocated[row[] :: horizontal[4] :: single[evendist[]]];
var b:array[Int,16,16] :: allocated[row[] :: vertical[4] :: single[evendist[]]];
var e:array[Int,16,16] :: allocated[row[] :: single[on[1]]];
var p;
par p from 0 to 3
{
var q:=(((b#p)#2)#3);
var r:=(((a#p)#2)#3);
var s:=((((b :: horizontal[])#p)#2)#3);
};
a:=e;
In this example (which involves 4 processes) there are three [[array|arrays]] declared: ''a'', ''b'' and ''e''. Array ''a'' is [[horizontal|horizontally]] partitioned into 4 blocks, evenly distributed amongst the processes, whilst ''b'' is [[vertical|vertically]] partitioned into 4 blocks and also evenly distributed amongst the processes. Array ''e'' is located on process 1 only. All arrays are allocated [[row]] major. In the [[par]] loop, variables ''q'', ''r'' and ''s'' are declared and assigned the values at specific points in a process's block. Because ''b'' is partitioned [[vertical|vertically]] and ''a'' [[horizontal|horizontally]], variable ''q'' is the value at ''b's'' block memory location 11, whilst ''r'' is the value at ''a's'' block memory location 35. On line 9, variable ''s'' is the value at ''b's'' block memory location 50 because, just for this expression, the programmer has used the [[horizontal]] type to take a horizontal view of the distributed array. Note that in line 9 only the view of the data is changed; the underlying data allocation is not modified.
In line 11 the assignment ''a:=e'' results in a scatter, as per the definition of its declared type.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Distribution Types]]
9ad82d216f86b3697c130955245ab3c1f3d93b5f
Record
0
96
529
2010-01-10T21:29:46Z
Polas
1
Created page with '== Syntax == record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>] == Semantics == The ''record'' type allows the…'
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef struct in C. To access a member of a record, use the dot operator ''.''.
== Example ==
var complex : record["r",Float,"i",Float];
var person: record["name",String, "age",Int, "gender",Char];
var a:array[complex,10];
(a#1).i:=22.3;
var b:complex;
var me:person;
me.name:="nick";
In the above example, ''complex'' (a complex number) is a record with two [[float]] elements, ''r'' and ''i''. The variable ''b'' is declared as a complex number and ''a'' as an array of ten of them. The variable ''me'' is of type ''person''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Composition Types]]
b4504acb2a12a3b7e168b4fc879799d6cd4aaede
Array
0
71
383
382
2010-01-10T21:30:27Z
Polas
1
/* Syntax */
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element type, followed by the dimensions. The programmer can provide any number of dimensions to create an n-dimensional array. The default is row major allocation (although this can be overridden via types). In order to access an element of an array, the programmer can either use the traditional ''name[index]'' syntax or, alternatively, ''name#index'', which is preferred.
== Communication ==
When an array variable is assigned to another, depending on where each variable is allocated, communication may be required to achieve the assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with the element types, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| local assignment on process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
var a:array[String,2] :: allocated[multiple[]];
(a#0):="Hello";
(a#1):="World";
print[(a#0)," ",(a#1),"\n"];
This example declares variable ''a'' to be an array of 2 Strings. The first location in the array is then set to ''Hello'' and the second location to ''World''. Lastly the code displays both these array string locations on stdout, followed by a newline.
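For a multi-dimensional array, the preferred ''#'' access chains one index per dimension, as used on the [[evendist]] page; a short sketch (assuming the default row major allocation):
 var m:array[Int,3,3] :: allocated[multiple[]];
 ((m#1)#2):=7;
 print[((m#1)#2),"\n"];
Here a single element of ''m'' is set to 7 and then displayed.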
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
2e9824cf998c42368249c659c847c7725921d68d
Referencerecord
0
97
537
2010-01-10T21:34:47Z
Polas
1
Created page with '== Syntax == referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>] == Semantics == The [[record]] type ma…'
wikitext
text/x-wiki
== Syntax ==
referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The [[record]] type may NOT refer to itself (or other records), whereas reference records support this, allowing the programmer to create data structures such as linked lists and trees. There are some added complexities to reference records, such as communicating them (all links and linked nodes are communicated with the record) and freeing the data (garbage collection). This results in a slight performance hit and is the reason why the record concept has been split into two types.
== Example ==
var node:referencerecord["prev",node,"data",Int,"next",node];
var head:node;
head:=null;
var i;
for i from 0 to 9
{
var newnode:node;
newnode.data:=i;
newnode.next:=head;
if (head!=null) head.prev:=newnode;
head:=newnode;
};
while (head != null)
{
print[head.data,"\n"];
head:=head.next;
};
In this code example a doubly linked list is created, and then its contents read node by node.
533512e44bbefdb205cdd74593d01c4d3f8ba782
538
537
2010-01-10T21:35:03Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Syntax ==
referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The [[record]] type may NOT refer to itself (or other records), whereas reference records support this, allowing the programmer to create data structures such as linked lists and trees. There are some added complexities to reference records, such as communicating them (all links and linked nodes are communicated with the record) and freeing the data (garbage collection). This results in a slight performance hit and is the reason why the record concept has been split into two types.
== Example ==
var node:referencerecord["prev",node,"data",Int,"next",node];
var head:node;
head:=null;
var i;
for i from 0 to 9
{
var newnode:node;
newnode.data:=i;
newnode.next:=head;
if (head!=null) head.prev:=newnode;
head:=newnode;
};
while (head != null)
{
print[head.data,"\n"];
head:=head.next;
};
In this code example a doubly linked list is created, and then its contents read node by node.
c9fa91ae1b9a7d0e8353739ae5fda72fb0373442
Template:Documentation
10
14
85
84
2010-01-10T21:36:57Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham|Core Mesham]]
**[[:Category:Types]]
**[[:Category:Sequential|Sequential]]
**[[:Category:Parallel|Parallel]]
**[[Procedures]]
**[[:Category:Preprocessor|Preprocessor]]
*[[:Category:Type Library|Type Library]]
**[[:Category:Element Types|Element Types]]
**[[:Category:Composite Types|Composite Types]]
*[[:Category:Function Library|Function Library]]
40bc09515c731b65908bf1977de4400f008d2f58
86
85
2010-01-10T21:37:17Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham|Core Mesham]]
**[[:Category:Types|Types]]
**[[:Category:Sequential|Sequential]]
**[[:Category:Parallel|Parallel]]
**[[Procedures]]
**[[:Category:Preprocessor|Preprocessor]]
*[[:Category:Type Library|Type Library]]
**[[:Category:Element Types|Element Types]]
**[[:Category:Composite Types|Composite Types]]
*[[:Category:Function Library|Function Library]]
99958cc8119581d286845a23e4fee052aa96be40
Category:Types
14
98
544
2010-01-10T21:41:51Z
Polas
1
Created page with 'A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below. Where ''elementtype'' is defined in the type library, ''var…'
wikitext
text/x-wiki
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below, where ''elementtype'' is a type defined in the type library, ''varname'' represents a variable name and ''type :: type'' represents type combination, coercing into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
Compound types are also listed in the type library; to give the reader a flavour, they may fall into a number of different subcategories of type:
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
| extended types
[[Category:Core Mesham]]
932f87d73254ff422935702ac6894cc77afdc552
545
544
2010-01-10T21:49:06Z
Polas
1
wikitext
text/x-wiki
== Overview ==
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below, where ''elementtype'' is a type defined in the type library, ''varname'' represents a variable name and ''type :: type'' represents type combination, coercing into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
Compound types are also listed in the type library; to give the reader a flavour, they may fall into a number of different subcategories of type:
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
| extended types
== Declarations ==
=== Syntax ===
var name:type;
Where ''type'', as explained, is an ''elementtype'', a ''compoundtype'', variable name or ''type :: type''. The operator '':'' sets the type and ''::'' is type combination (coercion).
=== Semantics ===
This will declare a variable to be a specific type. Type combination is subject to a number of semantic rules. If no type information is given, then the type will be found via inference where possible.
=== Examples ===
var i:Int :: allocated[multiple[]];
Here the variable ''i'' is declared to be integer, allocated to all processes. There are three types included in this declaration, the element type [[Int]] and the compound types [[allocated]] and [[multiple]]. The type [[multiple]] is provided as an argument to the allocation type [[allocated]], which is then combined with the [[Int]] type.
var m:String;
In this example, variable ''m'' is declared to be of type [[String]]. For programmer convenience, the language will automatically combine this with ''allocated[multiple[]]'' by default if such allocation information is missing.
== Statements ==
=== Syntax ===
name:type;
=== Semantics ===
Will modify the type of an already declared variable via the '':'' operator. Note, allocation information may not be changed.
=== Examples ===
var i:Int :: allocated[multiple[]];
i:=23;
i:i :: const[];
Here the variable ''i'' is declared to be [[Int|integer]], [[allocated]] to all processes, and its value is set to 23. Later in the code the type is modified to make the variable [[const|constant]] (so from this point on the programmer may not change its value). The third line, ''i:i :: const[];'', sets the type of ''i'' to be that of ''i'' combined with the [[const]] type.
'''Important Rule''' - Changing the type will not have any runtime code generation in itself, although the modified semantics will affect how the variable behaves from that point on.
== Expressions ==
=== Syntax ===
name::type
=== Semantics ===
When used as an expression, a variable's type can be coerced with additional types just for that expression.
=== Example ===
var i:Int :: allocated[multiple[]];
(i :: channel[1,2]):=82;
i:=12;
This code declares ''i'' to be an [[Int|integer]], [[allocated]] on all processes. On line 2, ''i :: channel[1,2]'' combines the [[channel]] type (primitive communication) just for that assignment, and on line 3 the assignment happens as for a normal integer. This is because line 2 does not set the type of ''i''; it merely modifies it for that assignment.
[[Category:Core Mesham]]
6aa14d8a10f606eabf0f1596fffb0cac725ebb39
546
545
2010-01-10T21:50:00Z
Polas
1
wikitext
text/x-wiki
== Overview ==
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below, where ''elementtype'' is a type defined in the type library, ''varname'' represents a variable name and ''type :: type'' represents type combination, coercing into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
Compound types are also listed in the type library; to give the reader a flavour, they may contain a number of different subcategories of type:
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
| extended types
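To give a flavour of how these forms appear in real code, the sketch below uses only types documented on this wiki; it is illustrative rather than definitive:
var a:Int;                           // elementtype on its own
var b:Int :: allocated[multiple[]];  // compound types combined via type :: type
typevar t::=Int :: const[];          // a type combination held in a type variable
var c:t;                             // varname used as a type
Here ''a'' uses a bare element type, ''b'' combines an element type with compound types, and ''c'' takes its type from the [[Type Variables|type variable]] ''t''.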
== Declarations ==
=== Syntax ===
var name:type;
Where ''type'', as explained, is an ''elementtype'', a ''compoundtype'', variable name or ''type :: type''. The operator '':'' sets the type and ''::'' is type combination (coercion).
=== Semantics ===
This will declare a variable to be a specific type. Type combination is subject to a number of semantic rules. If no type information is given, then the type will be found via inference where possible.
=== Examples ===
var i:Int :: allocated[multiple[]];
Here the variable ''i'' is declared to be integer, allocated to all processes. There are three types included in this declaration, the element type [[Int]] and the compound types [[allocated]] and [[multiple]]. The type [[multiple]] is provided as an argument to the allocation type [[allocated]], which is then combined with the [[Int]] type.
var m:String;
In this example, variable ''m'' is declared to be of type [[String]]. For programmer convenience, the language will automatically combine this with ''allocated[multiple[]]'' if such allocation information is missing.
== Statements ==
=== Syntax ===
name:type;
=== Semantics ===
Will modify the type of an already declared variable via the '':'' operator. Note that allocation information may not be changed.
=== Examples ===
var i:Int :: allocated[multiple[]];
i:=23;
i:i :: const[];
Here the variable ''i'' is declared to be an [[Int|integer]], [[allocated]] to all processes, and its value is set to 23. Later on in the code the type is modified to make it also [[const|constant]] (so from this point on the programmer may not change the variable's value). The third line, ''i:i :: const[];'', sets the type of ''i'' to be that of ''i'' combined with the [[const]] type.
'''Important Rule''' - Changing the type generates no runtime code in itself, although the modified semantics will affect how the variable behaves from that point on.
== Expressions ==
=== Syntax ===
name::type
=== Semantics ===
When used as an expression, a variable's type can be coerced with additional types just for that expression.
=== Example ===
var i:Int :: allocated[multiple[]];
(i :: channel[1,2]):=82;
i:=12;
This code declares ''i'' to be an [[Int|integer]], [[allocated]] on all processes. On line 2, ''i :: channel[1,2]'' combines in the [[channel]] type (primitive communication) just for that assignment, and then on line 3 the assignment proceeds as for a normal integer. This is because line 2 did not set the type of ''i''; it only modified it for that one assignment.
[[Category:Core Mesham]]
9664319f3750d8cdc2e43ca4a35ce98ca70ff569
Currenttype
0
99
552
2010-01-10T21:51:27Z
Polas
1
Created page with '== Syntax == currentype varname; == Semantics == Will return the current type of the variable. == Example == var i: Int; var q:currentype i; Will declare ''q'' to be an…'
wikitext
text/x-wiki
== Syntax ==
currenttype varname;
== Semantics ==
Will return the current type of the variable.
== Example ==
var i:Int;
var q:currenttype i;
Will declare ''q'' to have the same type as ''i'', i.e. an integer.
[[Category:Sequential]]
[[Category:Types]]
9eca2f2fce0608c8d00455389f56856c303c0bee
Declaredtype
0
100
557
2010-01-10T21:52:36Z
Polas
1
Created page with '== Syntax == declaredtype varname; == Semantics == Will return the declared type of the variable. == Example == var i:Int; i:i::const[]; i:declaredtype i; [[Category:…'
wikitext
text/x-wiki
== Syntax ==
declaredtype varname;
== Semantics ==
Will return the declared type of the variable.
== Example ==
var i:Int;
i:i::const[];
i:declaredtype i;
[[Category:Sequential]]
[[Category:Types]]
13bae3e6e09c5ce954e74479ed861cd60eca757d
558
557
2010-01-10T22:05:47Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Syntax ==
declaredtype varname;
== Semantics ==
Will return the declared type of the variable.
== Example ==
var i:Int;
i:i::const[];
i:declaredtype i;
This code example will firstly type ''i'' to be an [[Int]]. On line 2, the type of ''i'' is combined with the type [[const]] (enforcing read-only access to the variable's data). On line 3, the programmer reverts the variable back to its declared type (i.e. so one can write to the data again).
[[Category:Sequential]]
[[Category:Types]]
5f1c1467c55ae37427e094707fbe75fdc89dbe4c
Type Variables
0
101
563
2010-01-10T21:55:57Z
Polas
1
Created page with '== Syntax == typevar name::=type; name::=type; Note how ''::='' is used rather than '':='' ''typevar'' is the type equivalent of ''var'' == Semantics == Type variables allow …'
wikitext
text/x-wiki
== Syntax ==
typevar name::=type;
name::=type;
Note how ''::='' is used rather than '':=''.
''typevar'' is the type equivalent of ''var''.
== Semantics ==
Type variables allow the programmer to assign types and type combinations to variables for use as normal program variables. These exist only in compilation and are not present in the runtime semantics.
== Example ==
typevar m::=Int :: allocated[multiple[]];
var f:m;
typevar q::=declaredtype f;
q::=m;
In the above code example, the type variable ''m'' has the type value ''Int :: allocated[multiple[]]'' assigned to it. On line 2, a new (program) variable is created using this new type variable. On line 3, the type variable ''q'' is declared and given the value of the declared type of program variable ''f''. Lastly, on line 4, type variable ''q'' changes its value to become that of type variable ''m''. Although type variables can be thought of as the programmer creating new types, they can also be used like program variables in cases such as equality tests and assignment.
[[Category:Types]]
d22ac1f7e091e967677ae2c20cb2c21e09356551
Category:Type Library
14
102
568
2010-01-10T21:58:33Z
Polas
1
Created page with 'a'
wikitext
text/x-wiki
a
86f7e437faa5a7fce15d1ddcb9eaeaea377667b8
569
568
2010-01-10T21:58:52Z
Polas
1
Blanked the page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Maths Functions
14
103
571
2010-01-10T22:00:53Z
Polas
1
Created page with '[[Category:Function Library]]'
wikitext
text/x-wiki
[[Category:Function Library]]
9b2afd124884cd3c3dd6d9531c7d12900cea7e98
Category:IO Functions
14
104
574
2010-01-10T22:01:16Z
Polas
1
Created page with '[[Category:Function Library]]'
wikitext
text/x-wiki
[[Category:Function Library]]
9b2afd124884cd3c3dd6d9531c7d12900cea7e98
Category:Parallel Functions
14
105
577
2010-01-10T22:01:34Z
Polas
1
Created page with '[[Category:Function Library]]'
wikitext
text/x-wiki
[[Category:Function Library]]
9b2afd124884cd3c3dd6d9531c7d12900cea7e98
Category:String Functions
14
106
580
2010-01-10T22:02:31Z
Polas
1
Created page with '[[Category:Function Library]]'
wikitext
text/x-wiki
[[Category:Function Library]]
9b2afd124884cd3c3dd6d9531c7d12900cea7e98
Category:System Functions
14
107
583
2010-01-10T22:02:49Z
Polas
1
Created page with '[[Category:Function Library]]'
wikitext
text/x-wiki
[[Category:Function Library]]
9b2afd124884cd3c3dd6d9531c7d12900cea7e98
Cos
0
108
586
2010-01-10T22:04:26Z
Polas
1
Created page with '== Overview == This cos[n] function will find the cosine of the value or variable ''n'' passed to it. '''Pass''' A double to find cosine of '''Returns''' A double representing …'
wikitext
text/x-wiki
== Overview ==
This cos[n] function will find the cosine of the value or variable ''n'' passed to it.
'''Pass''' A double to find cosine of
'''Returns''' A double representing the cosine
== Example ==
var a:=cos[10];
var y;
y:=cos[a];
[[Category:Function Library]]
[[Category:Maths Functions]]
0f7ee39cd215c1aa748834b9228f45404bcf4b2f
587
586
2010-01-10T22:06:38Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
This cos[n] function will find the cosine of the value or variable ''n'' passed to it.
* '''Pass:''' A double to find cosine of
* '''Returns:''' A double representing the cosine
== Example ==
var a:=cos[10];
var y;
y:=cos[a];
[[Category:Function Library]]
[[Category:Maths Functions]]
394bb5b8a610ea3013e78984d741679e8a78ecdd
Floor
0
109
595
2010-01-10T22:07:46Z
Polas
1
Created page with '== Overview == This floor[n] function will find the largest integer less than or equal to ''n''. * '''Pass:''' A double to find floor of * '''Returns:''' An integer representing…'
wikitext
text/x-wiki
== Overview ==
This floor[n] function will find the largest integer less than or equal to ''n''.
* '''Pass:''' A double to find floor of
* '''Returns:''' An integer representing the floor
== Example ==
var a:=floor[10.5];
var y;
y:=floor[a];
[[Category:Function Library]]
[[Category:Maths Functions]]
1ff54de5996ebd274c792658abf2517893932004
Getprime
0
110
600
2010-01-10T22:08:55Z
Polas
1
Created page with '== Overview == This getprime[n] function will find the ''n''th prime number. * '''Pass:''' An integer * '''Returns:''' An integer representing the prime == Example == var a:=…'
wikitext
text/x-wiki
== Overview ==
This getprime[n] function will find the ''n''th prime number.
* '''Pass:''' An integer
* '''Returns:''' An integer representing the prime
== Example ==
var a:=getprime[10];
var y;
y:=getprime[a];
[[Category:Function Library]]
[[Category:Maths Functions]]
dd43781deef7a8edd960e0ec16e41d051e6d258d
Log
0
111
605
2010-01-10T22:10:38Z
Polas
1
Created page with '== Overview == This log[n] function will find the logarithmic value of ''n'' * '''Pass:''' A double * '''Returns:''' A double representing the logarithmic value\twolines{} == …'
wikitext
text/x-wiki
== Overview ==
This log[n] function will find the logarithm of ''n''.
* '''Pass:''' A double
* '''Returns:''' A double representing the logarithmic value
== Example ==
var a:=log[10];
var y;
y:=log[a];
[[Category:Function Library]]
[[Category:Maths Functions]]
84eede6de5cc5649d8052c36b6df426a5a49a990
606
605
2010-01-10T22:10:55Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
This log[n] function will find the logarithm of ''n''.
* '''Pass:''' A double
* '''Returns:''' A double representing the logarithmic value
== Example ==
var a:=log[10];
var y;
y:=log[a];
[[Category:Function Library]]
[[Category:Maths Functions]]
ff0b2593657a0c5305f019a69f74cf5643930a86
Mod
0
112
612
2010-01-10T22:11:58Z
Polas
1
Created page with '== Overview == This mod[n,x] function will divide ''n'' by ''x'' and return the remainder. * '''Pass:''' Two integers * '''Returns:''' An integer representing the remainder ==…'
wikitext
text/x-wiki
== Overview ==
This mod[n,x] function will divide ''n'' by ''x'' and return the remainder.
* '''Pass:''' Two integers
* '''Returns:''' An integer representing the remainder
== Example ==
var a:=mod[7,2];
var y;
y:=mod[a,a];
[[Category:Function Library]]
[[Category:Maths Functions]]
4750a94ef669d04a73d15a61381fb0efdb4a8506
PI
0
113
617
2010-01-10T22:15:10Z
Polas
1
Created page with '== Overview == This pi[] function will return PI. ''Note: The number of significant figures of PI is implementation specific.'' * '''Pass:''' None * '''Returns:''' A double rep…'
wikitext
text/x-wiki
== Overview ==
This pi[] function will return PI.
''Note: The number of significant figures of PI is implementation specific.''
* '''Pass:''' None
* '''Returns:''' A double representing PI
== Example ==
var a:=pi[];
[[Category:Function Library]]
[[Category:Maths Functions]]
fc68610fb0e11e03425bf70cb3081941142dbd33
Pow
0
114
623
2010-01-10T22:16:38Z
Polas
1
Created page with '== Overview == This pow[n,x] function will return ''n'' to the power of ''x''. * '''Pass:''' Two integers * '''Returns:''' A double representing the result == Example == var…'
wikitext
text/x-wiki
== Overview ==
This pow[n,x] function will return ''n'' to the power of ''x''.
* '''Pass:''' Two integers
* '''Returns:''' A double representing the result
== Example ==
var a:=pow[2,8];
[[Category:Function Library]]
[[Category:Maths Functions]]
b4bdba87e7d8bb2097b17e9193db0a690655e02a
Randomnumber
0
115
629
2010-01-10T22:17:55Z
Polas
1
Created page with '== Overview == This randomnumber[n,x] function will return a random number between ''n'' and ''x''. ''Note: A whole number will be returned UNLESS you pass the bounds of 0,1 and …'
wikitext
text/x-wiki
== Overview ==
This randomnumber[n,x] function will return a random number between ''n'' and ''x''.
''Note: A whole number will be returned UNLESS you pass the bounds 0,1, in which case a decimal number is returned.''
* '''Pass:''' Two integers defining the bounds of the random number
* '''Returns:''' A Double representing the random number
== Example ==
var a:=randomnumber[10,20];
var b:=randomnumber[0,1];
In this case, ''a'' is a whole number between 10 and 20, whereas ''b'' is a decimal number.
[[Category:Function Library]]
[[Category:Maths Functions]]
6cb90765e32d4872d69aebc5d48d089e374f5cf6
Sqr
0
116
634
2010-01-10T22:18:59Z
Polas
1
Created page with '== Overview == This sqr[n] function will return the result of squaring ''n''. * '''Pass:''' An integer to square * '''Returns:''' An integer representing the squared result ==…'
wikitext
text/x-wiki
== Overview ==
This sqr[n] function will return the result of squaring ''n''.
* '''Pass:''' An integer to square
* '''Returns:''' An integer representing the squared result
== Example ==
var a:=sqr[10];
[[Category:Function Library]]
[[Category:Maths Functions]]
d1b2a82cd5ade222b2a7e2c4a0d84a671695f452
Sqrt
0
117
640
2010-01-10T22:19:58Z
Polas
1
Created page with '== Overview == This sqrt[n] function will return the result of square rooting ''n''. * '''Pass:''' An integer to find square root of * '''Returns:''' A double which is the squa…'
wikitext
text/x-wiki
== Overview ==
This sqrt[n] function will return the result of square rooting ''n''.
* '''Pass:''' An integer to find square root of
* '''Returns:''' A double which is the square root
== Example ==
var a:=sqrt[8];
[[Category:Function Library]]
[[Category:Maths Functions]]
fa258c2ddf572c206a224860376ca0ba78d24450
Input
0
118
645
2010-01-10T22:22:36Z
Polas
1
Created page with '== Overview == This input[n] function will ask the user for input via stdin, the result being placed into ''n'' * '''Pass:''' A variable for the input to be written into, of ty…'
wikitext
text/x-wiki
== Overview ==
This input[n] function will ask the user for input via stdin, the result being placed into ''n''.
* '''Pass:''' A variable for the input to be written into, of type String
* '''Returns:''' Nothing
== Example ==
var f:String;
input[f];
print[f,"\n"];
[[Category:Function Library]]
[[Category:IO Functions]]
02bf12f527f11074c4211cf9246d2dbb1b1c9a72
Print
0
119
651
2010-01-10T22:24:36Z
Polas
1
Created page with '== Overview == This print[n] function will display ''n'' to stdout. The programmer can pass any number of values or variables split by '','' * '''Pass:''' A variable to display…'
wikitext
text/x-wiki
== Overview ==
This print[n] function will display ''n'' to stdout. The programmer can pass any number of values or variables separated by '',''.
* '''Pass:''' A variable to display
* '''Returns:''' Nothing
== Example ==
var f:="hello";
var a:=23;
print[f," ", a ," 22\n"];
[[Category:Function Library]]
[[Category:IO Functions]]
fa92ba94116fa9e3325edf1854007064f4e3a581
Readchar
0
120
656
2010-01-10T22:25:42Z
Polas
1
Created page with '== Overview == This readchar[n] function will read a character from a file with handle ''n''. The file handle maintains its position in the file, so after a call to read char th…'
wikitext
text/x-wiki
== Overview ==
This readchar[n] function will read a character from a file with handle ''n''. The file handle maintains its position in the file, so after a call to readchar the position pointer will be incremented.
* '''Pass:''' The file handle to read character from
* '''Returns:''' A character (type Char) from the file
== Example ==
var a:=openfile["hello.txt","r"];
var u:=readchar[a];
closefile[a];
[[Category:Function Library]]
[[Category:IO Functions]]
4c7f117e30f6ea7ea33775c899a5fb52f697cba2
Readline
0
121
662
2010-01-10T22:26:50Z
Polas
1
Created page with '== Overview == This readline[n] function will read a line from a file with handle ''n''. The file handle maintains its position in the file, so after a call to readline the posi…'
wikitext
text/x-wiki
== Overview ==
This readline[n] function will read a line from a file with handle ''n''. The file handle maintains its position in the file, so after a call to readline the position pointer will be incremented.
* '''Pass:''' The file handle to read the line from
* '''Returns:''' A line of the file (type String)
== Example ==
var a:=openfile["hello.txt","r"];
var u:=readline[a];
closefile[a];
[[Category:Function Library]]
[[Category:IO Functions]]
fe8358542582efe8d0f6fde9f86e665e31ffdc23
Pid
0
122
667
2010-01-10T22:29:17Z
Polas
1
Created page with '== Overview == This pid[] function will return the current processes' ID number. * '''Pass:''' Nothing * '''Returns:''' An integer representing the current process ID == Exampl…'
wikitext
text/x-wiki
== Overview ==
This pid[] function will return the current process's ID number.
* '''Pass:''' Nothing
* '''Returns:''' An integer representing the current process ID
== Example ==
var a:=pid[];
[[Category:Function Library]]
[[Category:Parallel Functions]]
3b5c4c3345ec6b2a5680a10186d972de9b5ace3b
Processes
0
123
672
2010-01-10T22:30:12Z
Polas
1
Created page with '== Overview == This processes[] function will return the number of processes * '''Pass:''' Nothing * '''Returns:''' An integer representing the number of processes == Example …'
wikitext
text/x-wiki
== Overview ==
This processes[] function will return the number of processes.
* '''Pass:''' Nothing
* '''Returns:''' An integer representing the number of processes
== Example ==
var a:=processes[];
[[Category:Function Library]]
[[Category:Parallel Functions]]
399338aab2c6a6c93ceb92f54544f812ea8725b4
Charat
0
124
677
2010-01-10T22:32:33Z
Polas
1
Created page with '== Overview == This charat[s,n] function will return the character at position ''n'' of the string ''s''. * '''Pass:''' A string and integer * '''Returns:''' A character == Ex…'
wikitext
text/x-wiki
== Overview ==
This charat[s,n] function will return the character at position ''n'' of the string ''s''.
* '''Pass:''' A string and integer
* '''Returns:''' A character
== Example ==
var a:="hello";
var c:=charat[a,2];
[[Category:Function Library]]
[[Category:String Functions]]
c4847ca93e6aa7299ea27812a2f3898810988a47
Lowercase
0
125
683
2010-01-10T22:33:26Z
Polas
1
Created page with '== Overview == This lowercase[s] function will return the lower case result of string or character ''s''. * '''Pass:''' A string or character * '''Returns:''' A string or chara…'
wikitext
text/x-wiki
== Overview ==
This lowercase[s] function will return the lower-case form of the string or character ''s''.
* '''Pass:''' A string or character
* '''Returns:''' A string or character
== Example ==
var a:="HeLlO";
var c:=lowercase[a];
[[Category:Function Library]]
[[Category:String Functions]]
1c802630ee825254629066ab2c6191af4227c4f6
Strlen
0
126
689
2010-01-10T22:34:19Z
Polas
1
Created page with '== Overview == This strlen[s] function will return the length of string ''s''. * '''Pass:''' A string * '''Returns:''' An integer == Example == var a:="hello"; var c:=strle…'
wikitext
text/x-wiki
== Overview ==
This strlen[s] function will return the length of string ''s''.
* '''Pass:''' A string
* '''Returns:''' An integer
== Example ==
var a:="hello";
var c:=strlen[a];
[[Category:Function Library]]
[[Category:String Functions]]
843d5cfc4ae39bd226f5c1917e112fe3b445bb61
Substring
0
127
694
2010-01-10T22:35:19Z
Polas
1
Created page with '== Overview == This substring[s,n,x] function will return the string at the position between ''n'' and ''x'' of ''s''. * '''Pass:''' A string and two integers * '''Returns:''' …'
wikitext
text/x-wiki
== Overview ==
This substring[s,n,x] function will return the substring of ''s'' between positions ''n'' and ''x''.
* '''Pass:''' A string and two integers
* '''Returns:''' A string which is a subset of the string passed into it
== Example ==
var a:="hello";
var c:=substring[a,2,4];
[[Category:Function Library]]
[[Category:String Functions]]
ef75e735baed1dbdf39991f12bb47270d9b5573c
Toint
0
128
699
2010-01-10T22:36:12Z
Polas
1
Created page with '== Overview == This toint[s] function will convert the string ''s'' into an integer. * '''Pass:''' A string * '''Returns:''' An integer == Example == var a:="234"; var c:=t…'
wikitext
text/x-wiki
== Overview ==
This toint[s] function will convert the string ''s'' into an integer.
* '''Pass:''' A string
* '''Returns:''' An integer
== Example ==
var a:="234";
var c:=toint[a];
[[Category:Function Library]]
[[Category:String Functions]]
9eec8db329955c2e83580f1ef1d6ed852b481fd7
Uppercase
0
129
704
2010-01-10T22:38:20Z
Polas
1
Created page with '== Overview == This uppercase[s] function will return the upper case result of string or character ''s''. * '''Pass:''' A string or character * '''Returns:''' A string or chara…'
wikitext
text/x-wiki
== Overview ==
This uppercase[s] function will return the upper-case form of the string or character ''s''.
* '''Pass:''' A string or character
* '''Returns:''' A string or character
== Example ==
var a:="HeLlO";
var c:=uppercase[a];
[[Category:Function Library]]
[[Category:String Functions]]
37e2caadbb6414da1c6eefab0bd06f3ec39342a9
Displaytime
0
130
709
2010-01-10T22:42:27Z
Polas
1
Created page with '== Overview == This displaytime[] function will display the timing results recorded by the function [[recordtime]] along with the process ID. This is very useful for debugging o…'
wikitext
text/x-wiki
== Overview ==
This displaytime[] function will display the timing results recorded by the function [[recordtime]] along with the process ID. This is very useful for debugging or performance testing.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
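== Example ==
A minimal sketch, assuming ''recordtime'' and ''displaytime'' behave as described above:
recordtime[];
var a:=pow[2,16];
recordtime[];
displaytime[];
The two recorded times, together with the process ID, are displayed by the final call.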
[[Category:Function Library]]
[[Category:System Functions]]
d8dcca85d820c4734ebbcbb4dca450cb0852027f
Recordtime
0
131
713
2010-01-10T22:43:25Z
Polas
1
Created page with 'This recordtime[] function record the current (wall clock) execution time upon reaching that point. This is useful for debugging or performance testing, the time records can be d…'
wikitext
text/x-wiki
This recordtime[] function records the current (wall clock) execution time upon reaching that point. This is useful for debugging or performance testing; the recorded times can be displayed via the [[displaytime]] function.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
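== Example ==
A minimal sketch; the recorded times only become visible once [[displaytime]] is called:
recordtime[];
var a:=getprime[100];
recordtime[];
displaytime[];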
[[Category:Function Library]]
[[Category:System Functions]]
43d397e5e59fa2817843dbb9c6dc9df3fcdcc31d
Exit
0
132
717
2010-01-10T22:44:17Z
Polas
1
Created page with '== Overview == This exit[] function will cease program execution and return to the operating system. From an implementation point of view, this will return ''EXIT_SUCCESS'' to t…'
wikitext
text/x-wiki
== Overview ==
This exit[] function will cease program execution and return to the operating system. From an implementation point of view, this will return ''EXIT_SUCCESS'' to the OS.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
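== Example ==
A small sketch; nothing after the call to exit[] is executed:
print["Goodbye\n"];
exit[];
print["This line is never reached\n"];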
[[Category:Function Library]]
[[Category:System Functions]]
7d7ef795686b075b0352ee5d4f9b8a9125acf38c
Oscli
0
133
721
2010-01-10T22:45:12Z
Polas
1
Created page with '== Overview == This oscli[a] function will pass the command line interface (e.g. Unix or MS DOS) command to the operating system for execution. * '''Pass:''' A string represent…'
wikitext
text/x-wiki
== Overview ==
This oscli[a] function will pass the command line interface (e.g. Unix or MS DOS) command to the operating system for execution.
* '''Pass:''' A string representing the command
* '''Returns:''' Nothing
== Example ==
var a:String;
input[a];
oscli[a];
The above program is a simple interface, reading a command from the user and then passing it to the OS for execution.
[[Category:Function Library]]
[[Category:System Functions]]
febfcd996352a5f1a5d226daf788f019862b3ef5
Category:Function Library
14
134
726
2010-01-10T22:47:30Z
Polas
1
Created page with 'a'
wikitext
text/x-wiki
a
86f7e437faa5a7fce15d1ddcb9eaeaea377667b8
727
726
2010-01-10T22:47:43Z
Polas
1
Blanked the page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Mandelbrot
0
135
729
2010-01-10T22:54:09Z
Polas
1
Created page with '== Overview == The mandlebrot example will compute the Mandlebrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a f…'
wikitext
text/x-wiki
== Overview ==
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which are quite simple, do not really matter for our purposes. The important points are that, firstly, the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and, secondly, it produces an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\n# CREATOR: LOGS Program\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\n255\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The Mandelbrot set will be calculated for up to ''itermax'' iterations at each point; increasing this value gives a crisper image (but takes much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 generates the whole image, and increasing this value directs the computation into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for these are easy to come by on Unix based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users might want to rewrite some of the final part on process 0 so that a BMP (bitmap) file is created instead.
== Download ==
fae869fca3b82799dcc8bad66e4eeda78d9cde5e
730
729
2010-01-10T22:58:35Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
[[Image:mandle.gif|thumb|170px|left|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which are quite simple, do not really matter for our purposes. The important points are that, firstly, the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and, secondly, it produces an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\n# CREATOR: LOGS Program\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\n255\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The Mandelbrot set will be calculated for up to ''itermax'' iterations at each point; increasing this value gives a crisper image (but takes much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 generates the whole image, and increasing this value directs the computation into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for these are easy to come by on Unix based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users might want to rewrite some of the final part on process 0 so that a BMP (bitmap) file is created instead.
== Download ==
f3dc9db2ad759fb8f324342d2bd93cdc743f4c9e
731
730
2010-01-10T23:01:07Z
Polas
1
wikitext
text/x-wiki
== Overview ==
[[Image:mandle.gif|thumb|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which are quite simple, do not really matter for our purposes. The important points are firstly that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and secondly that it produces an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the specific colourisation of the resulting fractal. The example on this page is deliberately basic so that the potential programmer can understand it.
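For reference, the escape-time iteration at the heart of the computation can be sketched sequentially in plain Python (not Mesham; the escape threshold of 100 and the iteration cap match the source code on this page):

```python
def escape_time(cx, cy, itermax=1000):
    """Iterate z -> z*z + c and return the iteration at which |z| escapes, or 0."""
    x, y = 0.0, 0.0
    for iteration in range(1, itermax + 1):
        # simultaneous update matches the xx temporary in the Mesham code
        x, y = (x * x - y * y) + cx, (2 * x * y) + cy
        if x * x + y * y > 100:  # same escape threshold as the Mesham code
            return iteration
    return 0  # never escaped: the point is (probably) in the set
```

Points inside the set return 0; points outside escape after a few iterations, and that iteration count is what drives the colourisation. The parallel Mesham version simply gives each process a strip of rows and runs this iteration for every pixel in the strip.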
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\\n255\\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The Mandelbrot set is computed for up to ''itermax'' iterations at each point; increasing this value gives a crisper image (but takes much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 generates the whole image, and increasing it directs the computation into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for PPM images are easy to come by on Unix based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users might want to rewrite the last part of the process 0 code so that a bitmap (BMP) file is created instead.
== Download ==
e65774145d467c5866b94323af72c405a96a9710
File:Mandle.gif
6
136
744
2010-01-10T22:55:06Z
Polas
1
Mandelbrot example written in Mesham
wikitext
text/x-wiki
Mandelbrot example written in Mesham
96c49786466d38afa546f88100b6dd44fa0e0380
Prefix sums
0
137
746
2010-01-10T23:02:45Z
Polas
1
Created page with '== Overview == Prefix sums is a very simple, basic parallel algorithm commonly used as the building block of many applications. Also known as a scan, each process will sumate th…'
wikitext
text/x-wiki
== Overview ==
Prefix sums is a very simple, basic parallel algorithm commonly used as the building block of many applications. Also known as a scan, each process sums its value with the values of every preceding process. For instance, p=0 returns its own value, p=1 returns the sum of the p=1 and p=0 values, and p=2 returns the sum of the p=2, p=1 and p=0 values. The MPI reduce command often implements the communication via a logarithmic tree structure.
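To make the semantics concrete, here is a sequential Python sketch (not Mesham) of the same inclusive scan: the result for process p is the sum of the values of processes 0 to p:

```python
def prefix_sums(values):
    """Inclusive scan: result[p] is values[0] + values[1] + ... + values[p]."""
    result = []
    running = 0
    for v in values:
        running += v
        result.append(running)
    return result

# prefix_sums([3, 1, 4]) -> [3, 4, 8]
```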
== Source Code ==
function void main[var arga,var argb]
{
var m:=10;
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var mine:Int;
mine:= randomnumber[0,toInt[argb#1]];
var i;
for i from 0 to m - 1
{
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print[p," = ",a,"\n"];
};
};
== Notes ==
The function main has been included here so that the user can provide, via command line arguments, the upper bound of the random number. The complexity of the prefix sums is hidden by using the reduce primitive communication type.
== Download ==
6cca4cedd5b76612ce9ce82b17fc1bdec7370fc8
File:Dartboard.jpg
6
138
755
2010-01-10T23:03:26Z
Polas
1
Dartboard
wikitext
text/x-wiki
Dartboard
b560bd391a0504dee677d480d1ea12753fef21e9
Dartboard PI
0
139
757
2010-01-10T23:06:06Z
Polas
1
Created page with '== Overview == [[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]] The dartboard method to find PI is a simple algorithm to find the value of PI. At this point…'
wikitext
text/x-wiki
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm to find the value of PI. At this point it must be noted that there are much better methods out there to find PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square is equal to the ratio between the two areas - which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example each process will perform this simulated throwing a number of times, and then each process's approximation of PI is combined and averaged by one of the processes to obtain the result. Very roughly, this means that with d darts, thrown over r rounds on n processes, the time taken in parallel is the time it takes to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded communication costs from the parallel case to simplify the concept.) Hopefully quite obviously, changing the number of processes, the number of rounds or the number of darts thrown in each round will directly change the accuracy of the result.
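A sequential Python sketch (not Mesham) of the same estimate; the parallel version simply splits the rounds across processes and averages the per-process results:

```python
import random

def estimate_pi(darts, rounds, seed=0):
    """Average the per-round estimates 4 * hits / darts, as in the example."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        hits = 0
        for _ in range(darts):
            x = 2 * rng.random() - 1  # dart position in the square [-1, 1) x [-1, 1)
            y = 2 * rng.random() - 1
            if x * x + y * y < 1:  # inside the unit circle: the dart hit the board
                hits += 1
        total += 4 * hits / darts
    return total / rounds
```

With darts=1000 and rounds=100 (100,000 samples in total) the estimate is typically within a couple of hundredths of PI.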
== Source Code ==
var m:=10; // number of processes
var pi:array[Double,m,1]:: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var result:array[Double,m] :: allocated[single[on[0]]];
var mypi:Double;
mypi:=0;
var p;
par p from 0 to m - 1
{
var darts:=1000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i:=0;
for i from 0 to rounds
{
mypi:= mypi + (4 * (throwdarts[darts] % darts));
};
((pi#p)#0):=(mypi % rounds);
};
result:=pi;
proc 0
{
var avepi:Double;
avepi:=0;
var j:=0;
for j from 0 to m - 1
{
var y:=(result#j);
avepi:=avepi + y;
};
avepi:=avepi % m;
print["PI = ",avepi,"\n"];
};
function Int throwdarts[var darts]
{
darts: Int :: allocated[multiple[]];
var score:=0;
var n:=0;
for n from 0 to darts
{
var r:=randomnumber[0,1]; // random number between 0 and 1
var xcoord:=(2 * r) - 1;
r:=randomnumber[0,1]; // random number between 0 and 1
var ycoord:=(2 * r) - 1;
if ((sqr[xcoord] + sqr[ycoord]) < 1)
{
score:=score + 1; // hit the dartboard!
};
};
return score;
};
== Notes ==
An interesting aside is that we have used a function in this example, yet there is no main function. The throwdarts function will simulate throwing the darts for each round. As already noted in the language documentation, the main function is optional and without it the compiler will set the program entry point to be the start of the source code.
== Download ==
38fa21caf88ce2425ee89f83c337214ecfa19db6
Prime factorization
0
140
766
2010-01-10T23:07:41Z
Polas
1
Created page with '== Overview == This example will perform prime factorization on a number parallely, to return the prime factors which make up that number. The example uses the primitive communi…'
wikitext
text/x-wiki
== Overview ==
This example will perform prime factorization of a number in parallel, returning the prime factors which make up that number. The example uses the all-reduce primitive communication. There are actually a number of ways such a result can be obtained - this example is a simple parallel algorithm for the job.
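As a sequential Python reference (not Mesham), trial division does the same job; in the Mesham version each process tests a different arithmetic progression of candidate divisors, and the processes agree on the smallest surviving quotient via all-reduce:

```python
def prime_factors(n):
    """Repeatedly divide out the smallest divisor of n, which is always prime."""
    factors = []
    divisor = 2
    while n > 1:
        if n % divisor == 0:  # divisor divides n exactly
            factors.append(divisor)
            n //= divisor
        else:
            divisor += 1
    return factors

# prime_factors(976) -> [2, 2, 2, 2, 61]
```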
== Source Code ==
var n:=976; // this is the number to factorize
var m:=12; // number of processes
var s:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var k:=p;
var divisor;
var quotient:Int;
while (n > 1)
{
divisor:= getprime[k];
quotient:= n % divisor;
var remainder:= mod[n,divisor];
if (remainder == 0)
{
n:=quotient;
} else {
k:=k + m;
};
(s :: allreduce["min"]):=n;
if ((s==n) && (quotient==n))
{
print[divisor,","];
};
n:=s;
};
};
== Notes ==
Note how we have typed the quotient to be an integer - this means that the division n % divisor will throw away the remainder. Also, for the assignment s:=n, we have typed s to be an allreduce communication primitive (resulting in the MPI all-reduce command.) However, later on we use s as a normal variable in the assignment n:=s, because the typing applied in the previous assignment is only temporary.
As an exercise, the example could be extended so that the user provides the number either by command line arguments or via program input.
== Download ==
ef796ecdaccddf52f8b7e27363e709fe9ac234aa
Template:Examples
10
12
68
67
2010-01-10T23:08:24Z
Polas
1
wikitext
text/x-wiki
*[[NPB|NASA's Parallel Benchmarks]]
*[[Mandelbrot]]
*[[Image_processing|Image Processing With Filters]]
*[[Prefix_sums|Prefix Sums]]
*[[Dartboard_PI|Dartboard method to find PI]]
*[[Prime_factorization|Prime Factorization]]
a46d634ba2e0640e1df1f6301f91a7b39863f9d7
69
68
2010-01-10T23:09:10Z
Polas
1
wikitext
text/x-wiki
*[[NPB|NASA's Benchmarks]]
*[[Mandelbrot]]
*[[Image_processing|Image Processing]]
*[[Prefix_sums|Prefix Sums]]
*[[Dartboard_PI|Dartboard method find PI]]
*[[Prime_factorization|Prime Factorization]]
660f99cf2aa248fbc5d790c9564479e0fe4f6975
File:Imagep.jpg
6
141
773
2010-01-10T23:10:32Z
Polas
1
Example of high and low pass filters operating on an image
wikitext
text/x-wiki
Example of high and low pass filters operating on an image
44ca822d7d041388db2e0768c033edc01be7d571
Image processing
0
142
775
2010-01-10T23:15:19Z
Polas
1
Created page with '== Overview == This example is one of the more complex examples we have written in the language. It allows the user to perform some parallel image processing on a black and whit…'
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform some parallel image processing on a black and white image. The image processing supported is applying a low or high pass filter to the image. However, to do this the image first needs to be transformed into the frequency domain, and afterwards transformed back into the spatial domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and there are more efficient ones out there. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters and also invoke the high pass filter rather than the low pass filter which the code currently uses.
<center> [[Image:imagep.jpg]] </center>
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
(
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=toInt[red];
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then redistributed. It would improve the runtime if we could filter the data without having to collect it all on one central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for PPM images are easy to come by on Unix based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users might want to rewrite the last part of the process 0 code so that a bitmap (BMP) file is created instead.
== Download ==
f7613805790c2194a5301267c1b9f0cc0f22df61
776
775
2010-01-10T23:15:51Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform some parallel image processing on a black and white image. The image processing supported is applying a low or high pass filter to the image. However, to do this the image first needs to be transformed into the frequency domain, and afterwards transformed back into the spatial domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and there are more efficient ones out there. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters and also invoke the high pass filter rather than the low pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
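Once the image is in the frequency domain (with the origin moved to the centre), filtering is just a per-component mask: keep a component only if its distance from the origin is inside (low pass) or outside (high pass) a cutoff radius. A sequential Python sketch (not Mesham) of the low pass mask, using the same cutoff of 225 as the source code:

```python
import math

def lowpass(i, j, cutoff=225):
    """1 if the frequency component (i, j) lies within the cutoff radius, else 0."""
    return 1 if math.sqrt(i * i + j * j) < cutoff else 0

def apply_filter(data):
    """Multiply each complex component (re, im) by the mask, zeroing the rest."""
    for row in data:
        for j, (re, im) in enumerate(row):
            keep = lowpass(data.index(row), j)
            row[j] = (re * keep, im * keep)
```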
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
(
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=toInt[red];
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then redistributed. It would improve the runtime if we could filter the data without having to collect it all on one central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for PPM images are easy to come by on Unix based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users might want to rewrite the last part of the process 0 code so that a bitmap (BMP) file is created instead.
== Download ==
6d89f093ee60304104843fbbf7ab76ba2648cf75
Functions
0
38
204
203
2010-01-10T23:17:15Z
Polas
1
moved [[Procedures]] to [[Functions]]
wikitext
text/x-wiki
== Syntax ==
function returntype name[arguments]
== Semantics ==
In a function all arguments are passed by reference (even constants). If the type of an argument is a type chain (i.e. requires ''::'') then it should be declared in the body of the function.
== Example ==
function Int add[var a:Int,var b:Int]
{
return a + b;
};
This function takes two integers and will return their sum.
== The main function ==
Returns void and, like C, it can have either zero arguments or two. If present, the first argument is the number of command line parameters passed in, and the second argument is a String array containing them. Location 0 of the string array is the program name.
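As an illustrative (untested) sketch, following the convention just described and the syntax used by the examples elsewhere on this wiki:

```
function void main[var arga,var argb]
{
   // arga is the argument count, argb the String array of arguments
   print["Program name: ",argb#0,"\n"];
   print["Number of arguments: ",arga,"\n"];
};
```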
[[Category:Core Mesham]]
6fe7912314f0d2cab8b1331a180b15b6b0490a05
Procedures
0
143
792
2010-01-10T23:17:15Z
Polas
1
moved [[Procedures]] to [[Functions]]
wikitext
text/x-wiki
#REDIRECT [[Functions]]
7a7b5cb084fd2aa6ee3ba6b684ea45d8d1eea795
Template:Documentation
10
14
87
86
2010-01-10T23:17:48Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[Overview]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham|Core Mesham]]
**[[:Category:Types|Types]]
**[[:Category:Sequential|Sequential]]
**[[:Category:Parallel|Parallel]]
**[[Functions]]
**[[:Category:Preprocessor|Preprocessor]]
*[[:Category:Type Library|Type Library]]
**[[:Category:Element Types|Element Types]]
**[[:Category:Composite Types|Composite Types]]
*[[:Category:Function Library|Function Library]]
e2e1c30258b3604b8f4ef1947cb909ae5ecbcd3c
88
87
2010-01-11T14:07:07Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[The Compiler]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham|Core Mesham]]
**[[:Category:Types|Types]]
**[[:Category:Sequential|Sequential]]
**[[:Category:Parallel|Parallel]]
**[[Functions]]
**[[:Category:Preprocessor|Preprocessor]]
*[[:Category:Type Library|Type Library]]
**[[:Category:Element Types|Element Types]]
**[[:Category:Composite Types|Composite Types]]
*[[:Category:Function Library|Function Library]]
4b58c30dc077b9357cc03027b5fd64ef11180c03
NAS-IS Benchmark
0
144
794
2010-01-11T12:14:20Z
Polas
1
Created page with '== Overview == NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating performance of different classes of machine. By using the official NASA implementatio…'
wikitext
text/x-wiki
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers, in parallel, using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it - class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest amount of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the more low level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
== Source Code ==
The source code is more extensive than that of the other examples, with combination files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of both Mesham and the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
52275474e1eecd6eff0d65955a0148388d40641e
795
794
2010-01-11T12:14:36Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers, in parallel, using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it - class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest amount of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the more low level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
== Source Code ==
The source code is more extensive than that of the other examples, with combination files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of both Mesham and the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
bba76063cf6891bc557e9162736e65c7dbb91466
796
795
2010-01-11T12:17:04Z
Polas
1
/* Notes */
wikitext
text/x-wiki
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers, in parallel, using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it - class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest amount of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the more low level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
== Source Code ==
The source code is more extensive than that of the other examples, with combination files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
6006610cd48d28eb08524b35f8b03afde7cbef89
Template:Downloads
10
11
53
52
2010-01-11T12:28:22Z
Polas
1
wikitext
text/x-wiki
*[[Download_0.5|All (version 0.5)]]
*[[Download_rtl_0.2|Runtime Library (version 0.2)]]
<hr>
*[[Download_all|All (version 0.41b)]]
*[[Download_rtl_0.1|Runtime Library (version 0.1)]]
6514f8a30f9e31fc0f2a915f28a92379f42c0dae
54
53
2010-01-11T12:29:07Z
Polas
1
wikitext
text/x-wiki
*[[Download_0.5|All (''version 0.5'')]]
*[[Download_rtl_0.2|Runtime Library 0.2]]
<hr>
*[[Download_all|All (''version 0.41b'')]]
*[[Download_rtl_0.1|Runtime Library 0.1]]
e9043ae61cfb45e881adc7d2203616da57f7012d
Download rtl 0.1
0
145
809
2010-01-11T12:33:07Z
Polas
1
Created page with '''Please note: This version is now depreciated, please install version 0.2 if possible'' == Runtime Library Version 0.1 == This is the Mesham Runtime Library Version 0.1 and th…'
wikitext
text/x-wiki
''Please note: This version is now deprecated; please install version 0.2 if possible''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library version 0.1, the last version to provide explicit support for Windows operating systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b); it will not work with Mesham 0.5.
== Download ==
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99-conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI).
b7f3fde173ad68e9c2e8e8e956657eed13918b68
Parallel Computing
0
146
817
2010-01-11T13:02:05Z
Polas
1
Created page with '== Parallel Computing == Parallel computing is the use of multiple computing resources to solve a problem. These problems can be very wide ranging, from smaller examples to high…'
wikitext
text/x-wiki
== Parallel Computing ==
Parallel computing is the use of multiple computing resources to solve a problem. These problems can be very wide ranging, from smaller examples to highly complex cosmological simulations or weather prediction codes. Utilising parallel computing adds additional complexities and challenges to programming. The programmer must consider a wide variety of new concepts and change their mindset from sequential to parallel. Having said that, the world we live in is predominantly parallel and as such it is natural to model problems in this way.
== The Problem ==
Current parallel languages are either conceptually simple or efficient - but not both. These aims have, until this point, been contradictory. If parallel computing is to grow (as we predict, given current advances in CPU and GPU technology) then this issue must be addressed. The problem is that we are using existing, sequential ways of thinking to try and solve this programmability problem; instead we need to think outside the box and come up with a completely new solution.
== Current Solutions ==
There are numerous parallel language solutions currently in existence; we will consider just a few:
=== Message Passing Interface ===
The MPI standard is extremely popular within this domain. Although bindings exist for many languages, it is most commonly used with C. The result is low level, highly complex, difficult to maintain but efficient code. As the programmer must control all aspects of parallelism, they can often get caught up in low level details which are uninteresting but important. Additionally, the programmer is completely responsible for ensuring all communications complete correctly, or else they run the risk of deadlock, livelock and so on.
=== Bulk Synchronous Parallel ===
The BSP standard was once touted as the solution to parallel computing. Implementations of this standard are most commonly used in conjunction with C. The program is split into supersteps, and each superstep is split into three stages: computation, communication and global synchronisation via barriers. However, this synchronisation is very expensive and as such the performance of BSP is generally much poorer than that of MPI. In addition, although the communication model adopted by BSP is simpler, the programmer must still address low level issues (such as pointers) imposed by the underlying language used.
=== High Performance Fortran ===
In HPF the programmer just specifies the general distribution of data, with the compiler taking care of all other aspects of parallelism (such as computation distribution and communication). Although HPF is a simple, abstract language, because so much emphasis is placed upon the compiler to deduce parallelism, efficiency suffers. The programmer, who is often in a far better position to indicate parallel aspects, lacks control and is limited. One useful feature of HPF is that all parallel aspects are expressed via comments, so that an HPF program is also acceptable to a normal Fortran compiler.
==== Co-Array Fortran ====
This language is more explicit than HPF. The programmer, via co-arrays, distributes computation and data but must rely on the compiler to determine communication (which is often one sided). Because of this one sided communication, messages are often short, which results in the overhead of sending many separate messages. Having said this, things are improving with respect to CAF: the upcoming Fortran standard is said to include co-arrays, which will see the integration of CAF concepts into standard Fortran.
400118540b7ed5abf3ce976fa973492bb9bb7b10
818
817
2010-01-11T13:02:53Z
Polas
1
wikitext
text/x-wiki
== Parallel Computing ==
Parallel computing is the use of multiple computing resources to solve a problem. These problems can be very wide ranging, from smaller examples to highly complex cosmological simulations or weather prediction codes. Utilising parallel computing adds additional complexities and challenges to programming. The programmer must consider a wide variety of new concepts and change their mindset from sequential to parallel. Having said that, the world we live in is predominantly parallel and as such it is natural to model problems in this way.
== The Problem ==
Current parallel languages are either conceptually simple or efficient - but not both. These aims have, until this point, been contradictory. If parallel computing is to grow (as we predict, given current advances in CPU and GPU technology) then this issue must be addressed. The problem is that we are using existing, sequential ways of thinking to try and solve this programmability problem; instead we need to think outside the box and come up with a completely new solution.
== Current Solutions ==
There are numerous parallel language solutions currently in existence; we will consider just a few:
=== Message Passing Interface ===
The MPI standard is extremely popular within this domain. Although bindings exist for many languages, it is most commonly used with C. The result is low level, highly complex, difficult to maintain but efficient code. As the programmer must control all aspects of parallelism, they can often get caught up in low level details which are uninteresting but important. Additionally, the programmer is completely responsible for ensuring all communications complete correctly, or else they run the risk of deadlock, livelock and so on.
=== Bulk Synchronous Parallel ===
The BSP standard was once touted as the solution to parallel computing. Implementations of this standard are most commonly used in conjunction with C. The program is split into supersteps, and each superstep is split into three stages: computation, communication and global synchronisation via barriers. However, this synchronisation is very expensive and as such the performance of BSP is generally much poorer than that of MPI. In addition, although the communication model adopted by BSP is simpler, the programmer must still address low level issues (such as pointers) imposed by the underlying language used.
=== High Performance Fortran ===
In HPF the programmer just specifies the general distribution of data, with the compiler taking care of all other aspects of parallelism (such as computation distribution and communication). Although HPF is a simple, abstract language, because so much emphasis is placed upon the compiler to deduce parallelism, efficiency suffers. The programmer, who is often in a far better position to indicate parallel aspects, lacks control and is limited. One useful feature of HPF is that all parallel aspects are expressed via comments, so that an HPF program is also acceptable to a normal Fortran compiler.
=== Co-Array Fortran ===
This language is more explicit than HPF. The programmer, via co-arrays, distributes computation and data but must rely on the compiler to determine communication (which is often one sided). Because of this one sided communication, messages are often short, which results in the overhead of sending many separate messages. Having said this, things are improving with respect to CAF: the upcoming Fortran standard is said to include co-arrays, which will see the integration of CAF concepts into standard Fortran.
352195ba76ceab3a77b8d301956654c20c5bc9e5
File:Pram.gif
6
147
823
2010-01-11T13:12:22Z
Polas
1
Parallel Random Access Machine
wikitext
text/x-wiki
Parallel Random Access Machine
b7936ec07dfd143609eabc6862a0c7fa0f6b8b17
File:Messagepassing.gif
6
148
825
2010-01-11T13:13:46Z
Polas
1
Message Passing based communication
wikitext
text/x-wiki
Message Passing based communication
78f5d58106e6dcbc6620f6143e649e393e3eae10
Communication
0
149
827
2010-01-11T13:14:28Z
Polas
1
Created page with '== Communication == Key to parallel computing is the idea of communication. There are two general communication models, shared memory and message passing. It is important to con…'
wikitext
text/x-wiki
== Communication ==
Key to parallel computing is the idea of communication. There are two general communication models, shared memory and message passing. It is important to consider both these models because of the different advantages and disadvantages which each exhibits.
== Shared Memory ==
In the shared memory model, each process shares the same memory and therefore the same data. In this model communication is implicit. When programming using this model, care must be taken to avoid memory conflicts. There are a number of different sub-models, such as the Parallel Random Access Machine (PRAM), whose simplicity has led to its popularity.
=== PRAM ===
The figure below illustrates how a PRAM would look, with each processor sharing the same memory and by extension the program to execute. However, a pure PRAM machine is impossible to create in reality with a large number of processors due to hardware constraints, so variations to this model are required in practice.
<center>[[Image:pram.gif|A Parallel Random Access Machine]]</center>
=== BSP ===
Bulk Synchronous Parallelism (BSP) is a parallel programming model that abstracts from low-level program structures in favour of supersteps. A superstep consists of a set of independent local computations, followed by a global communication phase and a barrier synchronisation. One of the major advantages of BSP is that with just four parameters it is possible to predict the runtime cost of a parallel program. This model is considered a very convenient view of synchronisation. However, barrier synchronisation does have an associated cost: the performance of barriers on distributed-memory machines is predictable, although not good. On the other hand, despite this performance hit, with BSP there is no worry of deadlock or livelock and therefore no need for detection tools and their additional associated cost. The benefit of BSP is that it imposes a clearly structured communication model upon the programmer; however, extra work is required to perform more complex operations, such as scattering of data.
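The superstep structure described above can be sketched with Python threads standing in for processors. This is only an illustration of the compute/communicate/barrier pattern, not a BSP library; all names are ours.

```python
import threading

NPROCS = 4
barrier = threading.Barrier(NPROCS)
inbox = [[] for _ in range(NPROCS)]   # messages become visible after the barrier
total = []

def worker(rank, data):
    # Computation phase: independent local work
    s = sum(data)
    # Communication phase: send the partial result to processor 0
    inbox[0].append(s)
    # Barrier synchronisation ends the superstep
    barrier.wait()
    # Next superstep: processor 0 now sees every message
    if rank == 0:
        total.append(sum(inbox[0]))

threads = [threading.Thread(target=worker, args=(r, [r, r]))
           for r in range(NPROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every processor reaches the barrier before any message is read, there is no possibility of deadlock, which is exactly the structural guarantee BSP trades performance for.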
=== Logic of Global Synchrony ===
Another model following the shared memory approach is the Logic of Global Synchrony (LOGS). LOGS describes a computation as a number of behaviours: an initial state, a final state and a sequence of intermediate states. The intermediate global states are made explicit, although the mechanics of communication and synchronisation are abstracted away.
=== Advantages ===
* Relatively Simple
* Convenient
=== Disadvantages ===
* Poor Performance
* Not Scalable
== Message Passing ==
Message passing is a paradigm used widely on certain classes of parallel machines, especially those with distributed memory. In this model, processors are very distinct from each other, with the only connection being that messages can be passed between them. Unlike in the shared memory model, in message passing communication is explicit. The figure below illustrates a typical message passing parallel system setup, with each processor equipped with its own services such as memory and IO. Additionally, each processor has a separate copy of the program to execute, which has the advantage of being able to tailor it to specific processors for efficiency reasons. A major benefit of this model is that processors can be added or removed on the fly, which is especially important in large, complex parallel systems.
<center>[[Image:messagepassing.gif|Message Passing Communication Architecture]]</center>
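The explicit send/receive style of this model can be sketched as follows, with Python threads and queues standing in for processors and their channels. This illustrates the idea only; it is not MPI and the names are ours.

```python
import threading
import queue

# One queue per "processor" acts as its incoming message channel
channels = [queue.Queue() for _ in range(2)]
received = []

def proc0():
    channels[1].put("ping")             # explicit send to processor 1
    received.append(channels[0].get())  # blocking receive of the reply

def proc1():
    msg = channels[1].get()             # blocking receive from own channel
    channels[0].put(msg + "/pong")      # explicit reply to processor 0

t0 = threading.Thread(target=proc0)
t1 = threading.Thread(target=proc1)
t0.start(); t1.start()
t0.join(); t1.join()
```

Note that every transfer is spelled out by the programmer; forgetting a matching receive is precisely how deadlock arises in this model.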
=== Advantages ===
* Good Performance
* Scalable
=== Disadvantages ===
* Difficult to program and maintain
5f0bd169b7b48b7b76c24bee258710c371e1ec17
File:Bell.gif
6
150
832
2010-01-11T13:22:34Z
Polas
1
Decreasing performance as the number of processors becomes too great
wikitext
text/x-wiki
Decreasing performance as the number of processors becomes too great
d2a2265a09e2b9959e9c9e4c9eed8f4bbaf7501e
File:Bell.jpg
6
151
834
2010-01-11T13:23:53Z
Polas
1
Decreasing performance as the number of processors becomes too great
wikitext
text/x-wiki
Decreasing performance as the number of processors becomes too great
d2a2265a09e2b9959e9c9e4c9eed8f4bbaf7501e
Computation
0
152
836
2010-01-11T13:24:13Z
Polas
1
Created page with '== Flynn's Taxonomy == This is an important classification of computer architectures proposed in the 1960s. It is important to match the appropriate computation model to the pro…'
wikitext
text/x-wiki
== Flynn's Taxonomy ==
This is an important classification of computer architectures proposed in the 1960s. It is important to match the appropriate computation model to the problem being solved. The two main classifications are shown below, although many languages allow the programmer to mix these classifications and Mesham is no different.
=== Single Program Multiple Data ===
In SPMD, each process executes the same program with its own data. The benefit of SPMD is that only one set of code need be written for all processors, although this can be bloated and lacks support for optimising specific parts for specific architectures.
=== Multiple Program Multiple Data ===
In MPMD each process executes its own program on its own data. The benefit of MPMD is that it is possible to tailor the code to run efficiently on each processor, and it also keeps the code each processor will execute relatively small; however, writing code for each processor in a large system is not practical.
== The Design of Parallelism ==
In designing how your parallel program will exploit the advantages of parallelism there are two main forms the parallel aspects can take. Which form of parallelism is employed depends on the nature of the problem.
=== Data Parallelism ===
In data parallelism each processor executes the same instructions, but works on a different data set. For instance, with matrix multiplication, one processor may work on one section of the matrices whilst other processors work on other sections, solving the problem in parallel. As a generalisation, data parallelism, which often requires an intimate knowledge of the data and explicit parallel programming, usually yields better performance.
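The matrix example can be sketched like this: the same instructions run over each processor's own block of rows. This is a serial Python illustration of the decomposition, with the loop standing in for concurrent execution; names are ours.

```python
def matvec_block(rows, vec):
    # The same instructions, applied to whichever rows this processor owns
    return [sum(a * b for a, b in zip(row, vec)) for row in rows]

matrix = [[1, 2], [3, 4], [5, 6], [7, 8]]
vec = [1, 1]
nprocs = 2
chunk = len(matrix) // nprocs
# Data decomposition: each processor gets a contiguous block of rows
blocks = [matrix[i * chunk:(i + 1) * chunk] for i in range(nprocs)]
result = []
for block in blocks:   # in a real data-parallel run these execute concurrently
    result.extend(matvec_block(block, vec))
```

Each block's computation touches only its own rows, so the blocks can be computed on different processors with no communication until the results are gathered.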
=== Task Parallelism ===
In task parallelism the program is divided up into tasks, each of which is sent to a unique processor to solve at the same time. Commonly, task parallelism can be thought of as processors executing distinct threads or processes, and at the time of writing it is the popular way in which operating systems take advantage of multicore processors. Task parallelism is often easier to perform but less effective than data parallelism.
== Problem Classification ==
When considering both the advantages of parallelising a problem and how to do so, it is important to appreciate how the problem should be decomposed across multiple processors. There are two extremes of problem classification: embarrassingly parallel problems and tightly coupled problems.
=== Embarrassingly Parallel ===
Embarrassingly parallel problems are those which require very little or no work to separate into a parallel form, and often there exist no dependencies or communication between the processors. There are numerous examples of embarrassingly parallel problems, many of which exist in the graphics world, which is why the employment of many-core GPUs has become a popular performance boosting choice.
=== Tightly Coupled Problems ===
The other extreme is that of tightly coupled problems, where it is very difficult to parallelise the problem and doing so, if achieved, will result in many dependencies between processors. In reality most problems sit somewhere between these two extremes.
== Law of Diminishing Returns ==
There is a common misconception that "throwing" processors at a problem will automatically increase performance, regardless of the number of processors or the problem type. This is simply not true, because compared with computation, communication is a very expensive operation. There is an optimum number of processors, after which the cost of communication outweighs the saving in computation made by adding an extra processor and performance drops. The figure below illustrates a performance versus processors graph for a typical problem. As the number of processors is increased, performance at first improves; however, after reaching an optimum point it will then drop off. It is not uncommon in practice for the performance on far too many processors to be very much worse than it was on a single processor!
<center>[[Image:bell.jpg|As the number of processors goes too high performance will drop]]</center>
In theory a truly embarrassingly parallel problem (with no communication between processors) will not be subject to this rule, and the effect becomes more and more apparent as the problem type approaches that of a tightly coupled problem. The problem type, although a major consideration, is not the only factor shaping the performance curve - other issues, such as the types of processors, connection latency and the workload of the parallel cluster, will cause variations to this common bell curve.
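A toy cost model makes the bell curve concrete. Suppose the computation divides perfectly among p processors but each extra processor adds a fixed communication cost; the constants below are purely illustrative.

```python
def runtime(p, work=1000.0, comm=5.0):
    # Computation shrinks with p; communication grows with p.
    # work and comm are illustrative constants, not measured values.
    return work / p + comm * (p - 1)

times = {p: runtime(p) for p in (1, 2, 4, 8, 16, 32, 64)}
best = min(times, key=times.get)   # the optimum processor count
```

With these constants the runtime falls until 16 processors and rises afterwards, reproducing the bell-shaped curve: past the optimum, the added communication outweighs the saved computation.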
46f846dcf1332605ebf2a3b1de61df6bde9929fd
Template:Introduction
10
10
47
46
2010-01-11T13:25:07Z
Polas
1
wikitext
text/x-wiki
*[[What_is_Mesham|What is Mesham?]]
*[[Parallel_Computing|Parallel Computing]]
**[[Communication]]
**[[Computation]]
*Type Oriented Programming
**[[Type Oriented Programming Concept|The Concept]]
**[[Type Oriented Programming Uses|Uses]]
**[[Type Oriented Programming Why Here|Why Use it Here?]]
fc32d60245d95d152c4251792d88dfdaae31cbf9
Type Oriented Programming Concept
0
153
839
2010-01-11T13:35:02Z
Polas
1
Created page with '== Type Oriented Programming == Much work has been done investigating programming paradigms. Common paradigms include imperative, functional, object oriented and aspect oriented…'
wikitext
text/x-wiki
== Type Oriented Programming ==
Much work has been done investigating programming paradigms. Common paradigms include imperative, functional, object oriented and aspect oriented. However, we have developed the idea of type oriented programming. Taking the familiar concept of a type, we have associated in-depth runtime semantics with it, so that the behaviour of variable usage (i.e. access and assignment) can be determined by analysing the specific type. In many languages there is a requirement to combine a number of attributes with a variable; to this end we allow the programmer to combine types together to form a supertype (type chain).
== Type Chains ==
A type chain is a collection of types, combined together by the programmer. It is this type chain that determines the behaviour of a specific variable. Precedence in the type chain is from right to left (i.e. the last added type will override the behaviour of previously added types). This precedence allows the programmer to add additional information, either permanently or for a specific expression, as the code progresses.
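Right-to-left precedence can be modelled by thinking of each type in the chain as a set of attributes, with later types overriding earlier ones. The sketch below is a Python analogy of that resolution rule, not Mesham syntax, and the attribute names are invented for illustration.

```python
def resolve(type_chain):
    # Later entries in the chain override attributes set earlier,
    # mirroring the right-to-left precedence described above.
    attrs = {}
    for t in type_chain:
        attrs.update(t)
    return attrs

# Hypothetical attribute sets standing in for types in a chain
Int = {"kind": "int", "allocated": "multiple"}
single = {"allocated": "single", "on": 0}
# Adding 'single' to the chain overrides Int's default allocation
chain = resolve([Int, single])
```

The base type supplies defaults, and each type added to the chain refines or overrides them, which is how a single declaration can carry both data layout and distribution information.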
== Type Variables ==
Type variables are an interesting concept. Similar to normal program variables they are declared to hold a type chain. Throughout program execution they can be dealt with like normal program variables and can be checked via conditionals, assigned and modified.
== Advantages of the Approach ==
There are a number of advantages to type oriented programming:
* Efficiency - The rich amount of information allows the compiler to perform much static analysis and optimisation resulting in increased efficiency.
* Simplicity - By providing a clean type library the programmer can use well documented types to control many aspects of their code.
* Simpler language - By taking the majority of the language complexity away and placing it into a loosely coupled type library, the language is simpler from a design and implementation (compiler's) point of view. Adding numerous language keywords often results in a brittle design; using type oriented programming this is avoided.
* Maintainability - By changing the type one can have a considerable effect on the semantics of code; abstracting these details away from the programmer makes the code simpler, more flexible and easier to maintain.
72c890484f5541b0ec19377ac9963a3f21881732
Template:In Development
10
13
76
75
2010-01-11T13:41:48Z
Polas
1
wikitext
text/x-wiki
*Mesham
**[[General Additions]]
**[[Extentable Types]]
**[[Wish List]]
*[[New Compiler]]
6052f59b4a27cff1991f5eb566dfcd73713e543d
77
76
2010-01-11T13:45:08Z
Polas
1
wikitext
text/x-wiki
*Mesham
**[[General Additions]]
**[[Extentable Types]]
*[[New Compiler]]
cce614e664483ee181a93bcde8e9e1f3555465af
78
77
2010-01-11T13:49:50Z
Polas
1
wikitext
text/x-wiki
*Mesham
**[[General Additions]]
**[[Extendable Types]]
*[[New Compiler]]
4d25cd2ec6e8a87ac0b007ac4b25dc6f84ecafa5
Extendable Types
0
154
843
2010-01-11T13:44:35Z
Polas
1
Created page with 'A major idea for extension is to allow the programmer to create their own language types. In the current version of the language the programmer can only create new types at the c…'
wikitext
text/x-wiki
A major idea for extension is to allow the programmer to create their own language types. In the current version of the language the programmer can only create new types at the compiler level; this is not a major issue at the moment, due to the generality of the type library, but it does limit the language somewhat. Whilst it is relatively simple to create new types in this way, one cannot expect the programmer to have to modify the compiler in order to support the codes they wish to write. There are, however, a number of issues to consider in relation to this aim:
* How to implement this efficiently?
* How to maximise static analysis and optimisation?
* How to minimise memory footprint?
* The ideal way of structuring the programming interface?
93f5e0528a9501457c90b7b774bd2a7acd82bcaf
844
843
2010-01-11T13:49:27Z
Polas
1
moved [[Extentable Types]] to [[Extendable Types]]
wikitext
text/x-wiki
A major idea for extension is to allow the programmer to create their own language types. In the current version of the language the programmer can only create new types at the compiler level; this is not a major issue at the moment, due to the generality of the type library, but it does limit the language somewhat. Whilst it is relatively simple to create new types in this way, one cannot expect the programmer to have to modify the compiler in order to support the codes they wish to write. There are, however, a number of issues to consider in relation to this aim:
* How to implement this efficiently?
* How to maximise static analysis and optimisation?
* How to minimise memory footprint?
* The ideal way of structuring the programming interface?
93f5e0528a9501457c90b7b774bd2a7acd82bcaf
General Additions
0
155
848
2010-01-11T13:48:28Z
Polas
1
Created page with '== Accepted Additions == # [[Extendable types]] - 0% # Structure IO types - 0% # Addtional distribution types - 0% # Group keyword - 0% == Wish List == Please add here any fea…'
wikitext
text/x-wiki
== Accepted Additions ==
# [[Extendable types]] - 0%
# Structure IO types - 0%
# Additional distribution types - 0%
# Group keyword - 0%
== Wish List ==
Please add here any features you would like to see in the upcoming development of Mesham.
0266ec38c2fbd122a77b6c59724e803bbebe8ac0
Extentable Types
0
156
853
2010-01-11T13:49:27Z
Polas
1
moved [[Extentable Types]] to [[Extendable Types]]
wikitext
text/x-wiki
#REDIRECT [[Extendable Types]]
3b199f3fd3cfdb26ed0551cf6bc5565500055b0d
New Compiler
0
157
855
2010-01-11T13:53:29Z
Polas
1
Created page with 'The current Mesham compiler is mainly written in FlexibO, using Java to preprocess the source code. Whilst this combination is flexible it is not particularly efficient in the co…'
wikitext
text/x-wiki
The current Mesham compiler is mainly written in FlexibO, using Java to preprocess the source code. Whilst this combination is flexible, it is not particularly efficient in the compilation phase. To this end we are looking to reimplement the compiler in C. This reimplementation will allow us to combine all aspects of the compiler in one package, remove deprecated implementation code, clean up aspects of the compilation process, fix compiler bugs and provide a structured framework into which types can fit.
Like previous versions of the compiler, the results will be completely portable.
This page will be updated with news and developments in relation to this new compiler implementation.
7b053ed26969c5307ab052346348dc8892c19922
Download 0.41 beta
0
37
194
193
2010-01-11T13:54:30Z
Polas
1
wikitext
text/x-wiki
''Please Note: This version of Mesham is deprecated; if possible please use the latest version on the website''
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although some aspects are unavailable, which means that the Gadget-2 port is not supported by this version (it requires 0.50). Having said that, version 0.41 is the only one which currently explicitly supports Windows. Most likely explicit support for Windows will be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b) here], a zip file of approximately 1MB; the download supports both POSIX systems and Windows. Full installation instructions for your specific system are included in the download and are also given on this page.
== Installation on POSIX Systems ==
*Install Java RTE from java.sun.com
*Make sure you have a C compiler installed i.e. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
*The three different components must be configured for your machine and their locations on it; happily this is all automated in the installlinux script.
Open a terminal and cd into your Mesham directory - i.e. cd /home/work/mesham
Then issue the command ./installlinux and follow the on screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from their current locations - moving them will cause problems, and if you need to move them you must rerun the script and remake them.
*Now type make all
*If you have root access, login as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language onto your computer. Now read the readme file for information on how to run the compiler.
Nb: If you wish to change the configuration information created by the installer (this is for advanced users and is not required), you can - the installer tells you where it has written its config files, and the documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX-based system and follow those instructions. No, seriously: many of the tools and much of the support for parallelism really are designed for Unix-based OSes, and as such you will face an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although for an advanced user, installation and usage on Windows should still be possible in the future). Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#Java Run Time Environment from java.sun.com
#A C compiler and GNU make - MinGW is a very good choice that we suggest, at http://www.mingw.org/
#An implementation of MPI (see the MPI section for further details).
==== Install ====
To install Mesham, really all the hard work has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we would suggest c:\mesham but it really doesn't matter
*Now double click the installwindows.bat file - this will run the installation script; make sure you answer all the questions correctly (if you make an error just rerun it). The script does a number of things: firstly it automatically configures the compiler with your settings, secondly it configures the server and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server but not compile the compiler, just run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here. The simplest is to use one of our prebuilt libraries. In the libraries directory there will be two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit), extract the contents of the appropriate zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32 . By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32 . If you wish to compile the runtime library rather than use our prebuilt ones, then read the readme file in the libraries\windows directory. Note at this stage that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine, but libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we would suggest adding mc.exe (the file just compiled, in compiler\bin) to your MSDOS path. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and under System variables scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham - if not, change the path accordingly.)
Note - if you ever wish to move the location of the language files, you will need to rerun the installwindows.bat file to reconfigure the setup. Secondly, there is a prebuilt server runner called winrunserver.bat with some default options. If you don't want to build the items, you can run this, and then run compiler/wingui.bat for the Mesham into C viewer; without any other steps that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''' you MUST make the MPI executable files visible to Mesham. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and under System variables scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 - if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, you can create and compile Mesham source files anywhere. This text details how to get up and running; consult the language manual for specific language details.
*First, run the server - this can be found in the server directory; simply double click runserver.bat . The server will start up (this can take a few moments) and will tell you when it is ready
*Now, create a file - let's call it a.mesh. For the contents just put in:
var a:=34;
print[a,"\n"];
*Open an MSDOS terminal window, change to the directory where a.mesh is located and type mc a.mesh . The compiler should generate a.exe , which you can run via MSDOS or by double clicking on it. There are many options available; type mc -h to list them
If there are any problems, you may need to configure your MPI implementation. Certainly with MPICH2 you might need to start the process manager, called smpd.exe, in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the generated C code, without compiling it, you can use the C code viewer by double clicking windowsgui.bat in compiler\java .
==== MPI for Windows ====
It does not matter which implementation you install. Having said that, the majority of implementations have been created with Unix rather than Windows in mind. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and also the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install it. This will work automatically via the MPICH installer.
There are other options too, OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. It is hoped to make 0.50 available for download as soon as possible. There are some important differences between the two versions; the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
b89d99655c15cf04c94812b4efd6ecb05e694560
Download 0.5
0
158
860
2010-01-11T14:01:51Z
Polas
1
Created page with '== Version 0.5 == Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However this version of th…'
wikitext
text/x-wiki
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (mainly in the runtime library), although more experienced developers may be able to compile it on Windows.
== Download ==
== Installation Instructions ==
There are three basic components required for installing Mesham - the client, the server and the runtime library.
* Install the Java Runtime Environment from java.sun.com
* Make sure you have a C compiler installed, e.g. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
* The three components must be configured for your machine and their location; happily this is all automated in the installlinux script.
Open a terminal and cd into your Mesham directory - e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on screen prompts.
If there is an issue with running the command, run chmod +x installlinux and then try again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems; if you must move them, rerun the script and remake them.
* Now type make all
* If you have root access, login as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these 7 steps you have installed the Mesham language onto your computer!
== Using the Compiler ==
Assuming you have installed the language, you will now want to start writing some code! First you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver . The server will start up, telling you the version number and date of the Mesham compiler, and will then report when it is ready.
Now start a new terminal. If you are using MPICH 2, run an MPI daemon by typing mpd & . Create a Mesham source file (look in the language documentation for information about the language itself) and compile it via mc. For instance, if the source file is named hello.mesh, compile it via mc hello.mesh . You should see an executable called hello
Run the executable via ./hello (or whatever it is called). You do not need to run it via the mpirun or mpiexec command (although you can if you want), as the executable will automatically spawn the number of processes it requires.
If you do not wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java
Nb: If you wish to change the configuration information created by the installer (this is for advanced users and is not required), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the makefile and removing the ''#'' before the relevant line. The two optional features are the files in support of the Gadget-2 port (the Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine).
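As a purely illustrative sketch of the kind of edit described above (the object and file names here are invented, not taken from the actual RTL makefile), enabling an optional component means removing the leading #:

```make
# Hypothetical excerpt from the RTL makefile; names are illustrative only.
OBJS = logs.o communication.o
# Remove the leading '#' on a line below to enable that optional component:
#OBJS += peanohilbert.o snapshot.o paramfile.o   # Gadget-2 support files
#OBJS += hdf5io.o                                # HDF5 support (needs HDF5 installed)
```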
2f7e5b52affe0f39c59a200ddbdb768e1eb184b5
Download rtl 0.2
0
159
869
2010-01-11T14:04:50Z
Polas
1
Created page with '== Runtime Library Version 0.2 == Version 0.2 is currently the most up-to-date version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many…'
wikitext
text/x-wiki
== Runtime Library Version 0.2 ==
Version 0.2 is currently the most up-to-date version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many advantages and improvements over the previous version, and as such it is suggested you use this. The version on this page is backwards compatible with version 0.41(b). This version does not explicitly support the Windows OS, although an experienced programmer may be able to install it on that system.
== Download ==
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[All 0.5]] page.
8c4b0029b947a3b3a59e39b631ac04cb05f51cb9
870
869
2010-01-11T14:05:21Z
Polas
1
/* Instructions */
wikitext
text/x-wiki
== Runtime Library Version 0.2 ==
Version 0.2 is currently the most up-to-date version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many advantages and improvements over the previous version, and as such it is suggested you use this. The version on this page is backwards compatible with version 0.41(b). This version does not explicitly support the Windows OS, although an experienced programmer may be able to install it on that system.
== Download ==
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 0.5|Download 0.5 Package]] page.
fde4357d4b1b187a9d79b24fb7f74210c2c76922
File:Flexdetail.jpg
6
160
876
2010-01-11T14:09:56Z
Polas
1
Flexibo translation in detail
wikitext
text/x-wiki
Flexibo translation in detail
ed996494fbc47b463d3de57ba1ef36c89c656483
File:Overview.jpg
6
161
878
2010-01-11T14:10:49Z
Polas
1
Overview of Translation Process
wikitext
text/x-wiki
Overview of Translation Process
194801d32004be3229ac704ed630d88f5ac83f55
The Arjuna Compiler
0
162
880
2010-01-11T14:16:26Z
Polas
1
Created page with '== Overview == Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierachy works. The core translator produces ANSI stand…'
wikitext
text/x-wiki
== Overview ==
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles: firstly, it is architecture specific (versions exist for Linux, Windows etc.), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable and can be run in a number of ways. For simplicity, the user can execute it by double clicking it; the program will automatically spawn the number of processes required. Alternatively, the executable can be run via the MPI daemon, and may be started via a process file or queue submission program. Note that, as long as your MPI implementation supports multi-core (and the majority do), the code will execute properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
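The wrapping described above is simply an even division of processes over cores; as a sketch (Python here purely for illustration - the MPI implementation performs this placement itself):

```python
def processes_per_core(num_processes, num_cores):
    """Even wrapping of processes around cores, as in the examples above."""
    return num_processes // num_cores

# The two worked examples from the text:
print(processes_per_core(2, 2))  # 1 process on each core
print(processes_per_core(6, 2))  # 3 processes on each core
```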
== Translation In More Detail ==
The translator itself consists of a number of phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete it is sent to the translation server - following the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP, and from there it can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, but a web-based interface also exists, allowing code to be written without the programmer installing any software on their machine.
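The round trip described above can be sketched as follows. This is an illustrative Python mock-up only: the real server is written in FlexibO, and the port selection, message framing and reply text here are invented for the sketch, not the actual Mesham protocol.

```python
import socket
import threading

def translation_server(listener):
    """Accept one connection, read the 'Mesham source', send back 'C code'."""
    conn, _ = listener.accept()
    with conn:
        source = conn.recv(4096).decode()
        # A real server would translate here; we return a placeholder.
        conn.sendall(("/* generated from: " + source + " */").encode())

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
server = threading.Thread(target=translation_server, args=(listener,))
server.start()

# The client side: send the source over TCP, read the generated C back.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"var a:=34;")
client.shutdown(socket.SHUT_WR)   # tell the server we have finished sending
generated = client.recv(4096).decode()
client.close()
server.join()
listener.close()
print(generated)
```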
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* -o [name] ''Select output filename''
* -c ''Output C code only''
* -t ''Just link and output C code''
* -e ''Display C compiler errors and warnings also''
* -s ''Silent operation (no warnings)''
* -f [args] ''Forward Arguments to C compiler''
* -pp ''Just preprocess the Mesham source and output results''
* -static ''Statically link against the runtime library''
* -shared ''Dynamically link against the runtime library (default)''
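The options above combine in the usual way; a typical invocation might look like this (hello.mesh is a hypothetical source file name; the flags are those listed above):

```
mc hello.mesh -o hello -static -e
```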
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically places a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine; the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (it is linked in at runtime) - the advantages are that the executable is considerably smaller and that a change in the RTL need not require recompiling all your code.
b6dc23f359ce78a0e4b970ea2b6c097c9a736b6a
The Arjuna Compiler
0
162
881
880
2010-01-11T14:17:13Z
Polas
1
/* Command Line Options */
wikitext
text/x-wiki
== Overview ==
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles: firstly, it is architecture specific (versions exist for Linux, Windows etc.), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable and can be run in a number of ways. For simplicity, the user can execute it by double clicking it; the program will automatically spawn the number of processes required. Alternatively, the executable can be run via the MPI daemon, and may be started via a process file or queue submission program. Note that, as long as your MPI implementation supports multi-core (and the majority do), the code will execute properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
== Translation In More Detail ==
The translator itself consists of a number of phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete it is sent to the translation server - following the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP, and from there it can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, but a web-based interface also exists, allowing code to be written without the programmer installing any software on their machine.
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically places a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine; the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (it is linked in at runtime) - the advantages are that the executable is considerably smaller and that a change in the RTL need not require recompiling all your code.
e0e6aace0107dcaf4058fe098982cb777dd940bd
882
881
2010-01-11T14:17:27Z
Polas
1
/* Command Line Options */
wikitext
text/x-wiki
== Overview ==
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles: firstly, it is architecture specific (versions exist for Linux, Windows etc.), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable and can be run in a number of ways. For simplicity, the user can execute it by double clicking it; the program will automatically spawn the number of processes required. Alternatively, the executable can be run via the MPI daemon, and may be started via a process file or queue submission program. Note that, as long as your MPI implementation supports multi-core (and the majority do), the code will execute properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
== Translation In More Detail ==
The translator itself consists of a number of phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete it is sent to the translation server - following the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP, and from there it can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, but a web-based interface also exists, allowing code to be written without the programmer installing any software on their machine.
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically places a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine; the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (it is linked in at runtime) - the advantages are that the executable is considerably smaller and that a change in the RTL need not require recompiling all your code.
654fa3bd1ba86157da331372ddccfec801429ad7
883
882
2010-01-12T14:12:42Z
Polas
1
/* Command Line Options */
wikitext
text/x-wiki
== Overview ==
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles: firstly, it is architecture specific (versions exist for Linux, Windows etc.), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable and can be run in a number of ways. For simplicity, the user can execute it by double clicking it; the program will automatically spawn the number of processes required. Alternatively, the executable can be run via the MPI daemon, and may be started via a process file or queue submission program. Note that, as long as your MPI implementation supports multi-core (and the majority do), the code will execute properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
== Translation In More Detail ==
The translator itself consists of a number of phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete it is sent to the translation server - following the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP, and from there it can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, but a web-based interface also exists, allowing code to be written without the programmer installing any software on their machine.
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-I[dir]''' ''Look in the directory (as well as the current one) for preprocessor files''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-debug''' ''Display compiler structural warnings before rerunning''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically places a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine; the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (it is linked in at runtime) - the advantages are that the executable is considerably smaller and that a change in the RTL need not require recompiling all your code.
72ac1b7ba4ecb575d185ba7bd93c390331750431
Wish List
0
163
889
2010-01-11T14:24:01Z
Polas
1
Created page with 'We have numerous items with which we appreciate any assistance, these include: * Assistance with compiler implementation * Assistance with language design * Improving the docume…'
wikitext
text/x-wiki
We have numerous items with which we appreciate any assistance, these include:
* Assistance with compiler implementation
* Assistance with language design
* Improving the documentation online
* Providing more code examples
* Improving the website
* Anything else.... ''just tell us you are working on it''
b6dacc006098c5353ec884c202986556c1544d52
MediaWiki:Sidebar
8
164
891
2010-01-11T14:31:54Z
Polas
1
Created page with '* navigation ** mainpage|mainpage-description ** downloads|Downloads ** currentevents-url|currentevents ** recentchanges-url|recentchanges ** randompage-url|randompage ** helppag…'
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** downloads|Downloads
** currentevents-url|currentevents
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
53c7278e98126b6dbcce97519f5afab9ba4bd0ea
892
891
2010-01-11T14:34:48Z
Polas
1
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** downloads|Downloads
** [[What is Mesham]]
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
109252e01a5ba36e7dc6d453f2e3e7f32d3a4ac6
893
892
2010-01-11T14:35:06Z
Polas
1
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** downloads|Downloads
** What is Mesham
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
6b0adb1a254fb499ceda77d441efff09a523c45b
894
893
2010-01-11T14:35:26Z
Polas
1
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** downloads|Downloads
** What is Mesham|What is Mesham
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
75d8dbed3fe525b5805fb64369c157b4c974c204
Downloads
0
165
896
2010-01-11T14:33:01Z
Polas
1
Created page with '''This page contains all the downloads available on this website'' == Compiler Files == Version 0.5 Runtime Library 0.2 Version 0.41(b) Runtime Library 0.1 == Example Files ==…'
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
Version 0.5
Runtime Library 0.2
Version 0.41(b)
Runtime Library 0.1
== Example Files ==
== Misc ==
b9cd7cce15c2f6090c1ff23d3b2dc69d29d37a37
897
896
2010-01-11T17:33:43Z
Polas
1
/* Compiler Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
Version 0.5
Runtime Library 0.2
[http://www.mesham.com/downloads/all.zip Version 0.41(b)]
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1]
== Example Files ==
== Misc ==
a4054fe843b268078721c7e942f701b6f8456d4c
898
897
2010-01-11T17:35:15Z
Polas
1
/* Example Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
Version 0.5
Runtime Library 0.2
[http://www.mesham.com/downloads/all.zip Version 0.41(b)]
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1]
== Example Files ==
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
77e3f4cc83c8fb66e3e38a38e519fc6479634030
899
898
2010-01-11T17:35:39Z
Polas
1
/* Compiler Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
Mesham Version 0.5
Runtime Library 0.2
[http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b)]
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1]
== Example Files ==
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
04fb6662d8f172e566f71065e6066969c57cec8e
900
899
2010-01-11T17:36:28Z
Polas
1
/* Misc */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
Mesham Version 0.5
Runtime Library 0.2
[http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b)]
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1]
== Example Files ==
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
9e491949a410a430e5c99d881e9662867f180496
901
900
2010-01-12T13:57:05Z
Polas
1
/* Compiler Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
Mesham Version 0.5
Runtime Library 0.2
[http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b)]
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1 source]
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary]
== Example Files ==
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
5750825315ceed449b1ba7c4f501d3d153b242f6
902
901
2010-01-12T14:16:44Z
Polas
1
/* Compiler Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] '''latest'''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] '''latest'''
[http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b)]
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1 source]
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary]
== Example Files ==
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
f9ec05361f08bdf0f36a05735a9a0f6d249d128e
MediaWiki:Sitenotice
8
2
3
2
2010-01-11T14:36:53Z
Polas
1
Blanked the page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Download rtl 0.1
0
145
810
809
2010-01-11T17:20:36Z
Polas
1
/* Download */
wikitext
text/x-wiki
''Please note: this version is now deprecated; please install version 0.2 if possible.''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library Version 0.1, the last version to provide explicit support for Windows operating systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b); it will not work with Mesham 0.5.
== Download ==
You can download version 0.1 of the [http://www.mesham.com/downloads/libraries.zip Runtime Library here]
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99-conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI).
0b3d71bd7af8257285b2e68f505fe1ae49986ea0
811
810
2010-01-12T13:53:49Z
Polas
1
/* Download */
wikitext
text/x-wiki
''Please note: this version is now deprecated; please install version 0.2 if possible.''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library Version 0.1, the last version to provide explicit support for Windows operating systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b); it will not work with Mesham 0.5.
== Download ==
You can download version 0.1 of the [http://www.mesham.com/downloads/libraries01.zip Runtime Library here]
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99-conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI).
61f6ba067eb7d123002a43dd7d1a1069d525e337
812
811
2010-01-12T13:56:08Z
Polas
1
/* Download */
wikitext
text/x-wiki
''Please note: this version is now deprecated; please install version 0.2 if possible.''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library Version 0.1, the last version to provide explicit support for Windows operating systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b); it will not work with Mesham 0.5.
== Download ==
You can download version 0.1 of the [http://www.mesham.com/downloads/libraries01.zip Runtime Library here] ''(Source cross platform compatible.)''
You can download version 0.1 of the [http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library here] ''(Binary for Windows 32 bit.)''
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99-conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI).
52e3d14eb89a465c5658de7cdc2798f69bb4730c
813
812
2010-01-12T13:56:25Z
Polas
1
/* Download */
wikitext
text/x-wiki
''Please note: this version is now deprecated; please install version 0.2 if possible.''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library Version 0.1, the last version to provide explicit support for Windows operating systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b); it will not work with Mesham 0.5.
== Download ==
You can download version 0.1 of the [http://www.mesham.com/downloads/libraries01.zip Runtime Library here] ''(Source cross platform compatible.)''
You can download version 0.1 of the [http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library here] ''(Binary for Windows 32 bit.)''
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99-conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI).
c54263099e60c6a10ee6156a85d8c39f0fbdbcd8
Image processing
0
142
777
776
2010-01-11T17:22:56Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples written in the language. It allows the user to perform parallel image processing on a black-and-white image by applying a low-pass or high-pass filter. To do this, the image first needs to be transformed into the frequency domain, and afterwards transformed back into the spatial domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient ones exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filter cutoffs, or invoke the high-pass filter rather than the low-pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
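The pipeline the example follows (forward transform, filter in the frequency domain, inverse transform) can be sketched in plain Python. This is an illustrative sketch only: a naive O(n^2) DFT on a 1-D signal stands in for the 2-D FFT kernel, and all names here are ours, not part of Mesham.

```python
import cmath

def dft(x, inverse=False):
    # Naive O(n^2) discrete Fourier transform; the sign of the exponent
    # flips for the inverse, which is additionally scaled by 1/n.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def low_pass(freqs, cutoff):
    # Zero every bin whose frequency (counting aliases from the top end)
    # lies above the cutoff, mirroring the example's multiply-by-0-or-1 filter.
    n = len(freqs)
    return [f if min(k, n - k) <= cutoff else 0 for k, f in enumerate(freqs)]

signal = [0, 1, 2, 3, 4, 5, 6, 7]
smoothed = [v.real for v in dft(low_pass(dft(signal), cutoff=2), inverse=True)]
```

The 2-D case in the Mesham source is the same idea applied first to the rows and then, via a distributed transposition, to the columns.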
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
{
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=0; // imaginary part of a real-valued image starts at zero
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
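The bitreverse call at the top of the FFT kernel performs the data decomposition step of an iterative Cooley-Tukey FFT: element k moves to the index whose binary digits are k's reversed. A minimal Python sketch of that permutation (bit_reverse_permute is our own illustrative name; the Mesham runtime supplies the real bitreverse):

```python
def bit_reverse_permute(data):
    # Reorder a power-of-two-length list so index k moves to the index
    # whose bits are k's bits reversed - after this, the butterfly stages
    # of an iterative Cooley-Tukey FFT can run in place.
    n = len(data)
    bits = n.bit_length() - 1
    out = [None] * n
    for k in range(n):
        rev = int(format(k, f'0{bits}b')[::-1], 2)
        out[rev] = data[k]
    return out

print(bit_reverse_permute([0, 1, 2, 3, 4, 5, 6, 7]))  # [0, 4, 2, 6, 1, 5, 3, 7]
```

Note that the permutation is an involution: applying it twice restores the original order.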
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then redistributed. Runtime would improve if the data could be filtered without collecting it all on one central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users may want to rewrite the final part run on process 0 so that a BMP file is created instead.
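For reference, the P6 header that writefile emits can be reproduced in a few lines of Python (a sketch for a square greyscale image; write_ppm_grey is our own illustrative name):

```python
def write_ppm_grey(path, pixels):
    # Write a square greyscale image as a binary PPM (P6) file, repeating
    # each grey value across R, G and B exactly as writefile above does.
    n = len(pixels)
    with open(path, 'wb') as f:
        f.write(b'P6\n# CREATOR: LOGS Program\n')
        f.write(f'{n} {n}\n255\n'.encode())
        for row in pixels:
            for v in row:
                f.write(bytes([v, v, v]))
```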
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
b9e133f5b1b8a50e36ebd204fab00c2ad4bbfdde
778
777
2010-01-11T17:55:22Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples written in the language. It allows the user to perform parallel image processing on a black-and-white image by applying a low-pass or high-pass filter. To do this, the image first needs to be transformed into the frequency domain, and afterwards transformed back into the spatial domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient ones exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filter cutoffs, or invoke the high-pass filter rather than the low-pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two experiments were performed: one with an image size of 128MB and the other with an image size of 2GB. Evaluations were made against the Fastest Fourier Transform in the West (FFTW) and, for the 128MB case, a textbook implementation. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW suffers severe slowdowns whereas the Mesham version does not (in this case the compiler optimises the code to avoid the slowdown).
[[Image:2gb.jpg|500px|left|Fast Fourier Transformation with 2GB of data]]
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
{
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=0; // imaginary part of a real-valued image starts at zero
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then redistributed. Runtime would improve if the data could be filtered without collecting it all on one central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users may want to rewrite the final part run on process 0 so that a BMP file is created instead.
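The moveorigin step used before and after filtering multiplies pixel (i, j) by (-1)^(i+j); in the frequency domain this shifts the zero-frequency component to the centre of the spectrum, so the circular filters measure distance from the image centre. A minimal Python sketch (move_origin is our own illustrative name):

```python
def move_origin(img):
    # Multiply element (i, j) by (-1)^(i+j). Doing this before the forward
    # transform centres the spectrum; doing it again after the inverse
    # transform undoes the shift, since the operation is its own inverse.
    return [[v * (-1) ** (i + j) for j, v in enumerate(row)]
            for i, row in enumerate(img)]

print(move_origin([[1, 1], [1, 1]]))  # [[1, -1], [-1, 1]]
```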
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
b17302f5cdb18edd75ea8c9e0b0b4a5f5bca5318
779
778
2010-01-11T17:56:12Z
Polas
1
/* Performance */
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples written in the language. It allows the user to perform parallel image processing on a black-and-white image by applying a low-pass or high-pass filter. To do this, the image first needs to be transformed into the frequency domain, and afterwards transformed back into the spatial domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient ones exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filter cutoffs, or invoke the high-pass filter rather than the low-pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two experiments were performed: one with an image size of 128MB and the other with an image size of 2GB. Evaluations were made against the Fastest Fourier Transform in the West (FFTW) and, for the 128MB case, a textbook implementation. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW suffers severe slowdowns whereas the Mesham version does not (in this case the compiler optimises the code to avoid the slowdown).
[[Image:2gb.jpg|500px|left|Fast Fourier Transformation with 2GB of data]]
[[Image:128.jpg|500px|right|Fast Fourier Transformation with 128MB of data]]
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
{
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=0; // imaginary part of a real-valued image starts at zero
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then redistributed. Runtime would improve if the data could be filtered without collecting it all on one central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users may want to rewrite the final part run on process 0 so that a BMP file is created instead.
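The two filters in the source above simply compare a frequency sample's distance from the origin against a cutoff radius (225 for the low pass, 190 for the high pass). In Python this is one line each (an illustrative sketch mirroring the Mesham lowpass and highpass functions; the names here are ours):

```python
import math

def low_pass(i, j, radius=225):
    # Keep the sample when sqrt(i^2 + j^2) is inside the cutoff radius,
    # the same test the example's lowpass function performs.
    return 1 if math.hypot(i, j) < radius else 0

def high_pass(i, j, radius=190):
    # Keep only the samples outside the cutoff radius.
    return 1 if math.hypot(i, j) > radius else 0
```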
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
6ab1b82e97a4bd3a342fda3d8096103a065c245b
780
779
2010-01-11T17:56:47Z
Polas
1
/* Performance */
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples written in the language. It allows the user to perform parallel image processing on a black-and-white image by applying a low-pass or high-pass filter. To do this, the image first needs to be transformed into the frequency domain, and afterwards transformed back into the spatial domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient ones exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filter cutoffs, or invoke the high-pass filter rather than the low-pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two experiments were performed: one with an image size of 128MB and the other with an image size of 2GB. Evaluations were made against the Fastest Fourier Transform in the West (FFTW) and, for the 128MB case, a textbook implementation. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW suffers severe slowdowns whereas the Mesham version does not (in this case the compiler optimises the code to avoid the slowdown).
[[Image:2gb.jpg|500px|left|Fast Fourier Transformation with 2GB of data]]
[[Image:128.jpg|500px|right|Fast Fourier Transformation with 128MB of data]]
<br/>
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
{
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=0; // imaginary part of a real-valued image starts at zero
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering and then redistributed. Runtime would improve if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are harder to find on Windows. Windows users might want to rewrite the final section of code on process 0 so that a bitmap (BMP) file is created instead.
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
7088649c6d965656c75df5b9152a140c7b0ea852
781
780
2010-01-11T17:59:33Z
Polas
1
/* Performance */
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform parallel image processing on a black and white image, applying either a low-pass or a high-pass filter. To do this the image must first be transformed into the frequency domain and, after filtering, transformed back into the time domain. At the core of the example is the FFT kernel, a basic Cooley-Tukey FFT algorithm; more efficient algorithms exist, but the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By experimenting you can change the filters, for example invoking the high-pass filter rather than the low-pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. The results were compared against the Fastest Fourier Transform in the West (FFTW) and, for the 128MB case, a textbook example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler optimises the code in this case to avoid any slowdown).
[[Image:2gb.jpg|500px|left|Fast Fourier Transformation with 2GB of data]]
[[Image:128.jpg|500px|right|Fast Fourier Transformation with 128MB of data]]
<br>
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
(
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=toInt[red];
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering and then redistributed. Runtime would improve if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are harder to find on Windows. Windows users might want to rewrite the final section of code on process 0 so that a bitmap (BMP) file is created instead.
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
4300de8375570ac6392aee3ee7747cee743175a6
782
781
2010-01-11T18:01:38Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform parallel image processing on a black and white image, applying either a low-pass or a high-pass filter. To do this the image must first be transformed into the frequency domain and, after filtering, transformed back into the time domain. At the core of the example is the FFT kernel, a basic Cooley-Tukey FFT algorithm; more efficient algorithms exist, but the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By experimenting you can change the filters, for example invoking the high-pass filter rather than the low-pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. The results were compared against the Fastest Fourier Transform in the West (FFTW) and, for the 128MB case, a textbook example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler optimises the code in this case to avoid any slowdown).
[[Image:128.jpg|500px|right|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|left|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
(
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=toInt[red];
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering and then redistributed. Runtime would improve if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are harder to find on Windows. Windows users might want to rewrite the final section of code on process 0 so that a bitmap (BMP) file is created instead.
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
03cb980c6d27535b1f3a86e18c56051e8e96de86
783
782
2010-01-11T18:02:15Z
Polas
1
/* Performance */
wikitext
text/x-wiki
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform parallel image processing on a black and white image, applying either a low-pass or a high-pass filter. To do this the image must first be transformed into the frequency domain and, after filtering, transformed back into the time domain. At the core of the example is the FFT kernel, a basic Cooley-Tukey FFT algorithm; more efficient algorithms exist, but the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By experimenting you can change the filters, for example invoking the high-pass filter rather than the low-pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
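For readers who want to experiment with the filtering idea outside Mesham, the same frequency-domain pipeline (forward 2D FFT, origin shift, radial mask, inverse FFT) can be sketched in Python with NumPy. This is an illustrative sketch, not part of the Mesham example; the cutoff value here is arbitrary rather than the 225 used by the example's ''lowpass'' function.

```python
import numpy as np

def lowpass_filter(image, cutoff=30.0):
    """Low-pass filter in the frequency domain, mirroring the structure of
    the Mesham example: forward 2D FFT, centre the origin, zero out the
    high frequencies, inverse 2D FFT."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # origin moved to centre
    n, m = spectrum.shape
    y, x = np.ogrid[-(n // 2):n - n // 2, -(m // 2):m - m // 2]
    mask = np.sqrt(x * x + y * y) < cutoff           # keep low frequencies only
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

img = np.random.rand(256, 256)
out = lowpass_filter(img)
print(out.shape)  # (256, 256)
```

With a cutoff large enough to cover the whole spectrum the mask passes everything, so the round trip reproduces the input image, which is a handy sanity check.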
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. The results were compared against the Fastest Fourier Transform in the West (FFTW) and, for the 128MB case, a textbook example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler optimises the code in this case to avoid any slowdown).
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
(
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=toInt[red];
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering and then redistributed. Runtime would improve if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are harder to find on Windows. Windows users might want to rewrite the final section of code on process 0 so that a bitmap (BMP) file is created instead.
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
c8dde381d52c65c935ad62d9b20d77deb3f7f296
Mandelbrot
0
135
732
731
2010-01-11T17:23:48Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Overview ==
[[Image:mandle.gif|thumb|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example computes the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which is quite simple, does not really matter for our purposes. The important points are that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and that it produces an image which the user can identify with.
The algorithm itself is quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is deliberately basic so that the prospective programmer can understand it.
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\\n255\\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The computation iterates up to ''itermax'' times for each point; increasing this value gives a crisper image (but takes much more time!). Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 generates the whole image, and increasing it directs the computation to work on a specific area in more detail.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are harder to find on Windows. Windows users might want to rewrite the final section of code on process 0 so that a bitmap (BMP) file is created instead.
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here]
ed22e48497e29e0a70b5c4dfc219351145996fb0
733
732
2010-01-11T18:00:42Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
[[Image:mandle.gif|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example computes the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which is quite simple, does not really matter for our purposes. The important points are that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and that it produces an image which the user can identify with.
The algorithm itself is quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is deliberately basic so that the prospective programmer can understand it.
<br style="clear: both" />
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\\n255\\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The computation iterates up to ''itermax'' times for each point; increasing this value gives a crisper image (but takes much more time!). Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 generates the whole image, and increasing it directs the computation to work on a specific area in more detail.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are harder to find on Windows. Windows users might want to rewrite the final section of code on process 0 so that a bitmap (BMP) file is created instead.
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here]
7f3a13babef891e13937526a0f76d19e24ea82ae
734
733
2010-01-11T18:01:04Z
Polas
1
wikitext
text/x-wiki
== Overview ==
[[Image:mandle.gif|170px|left|Mandelbrot in Mesham]]
The Mandelbrot example computes the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which is quite simple, does not really matter for our purposes. The important points are that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and that it produces an image which the user can identify with.
The algorithm itself is quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is deliberately basic so that the prospective programmer can understand it.
<br style="clear: both" />
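The core escape-time iteration that the example performs for every pixel can be sketched in Python. This is an illustrative sketch, not Mesham; the bailout value of 100 matches the example's test ''((x * x) + (y * y)) > 100'', and returning 0 for points that never escape mirrors the example leaving such pixels black.

```python
def escape_time(cx, cy, itermax=1000, bailout=100.0):
    """Escape-time iteration for one point (cx, cy), mirroring the inner
    loop of the example: z := z^2 + c until |z|^2 exceeds the bailout
    or itermax iterations have been performed."""
    x = y = 0.0
    for iteration in range(1, itermax + 1):
        # complex square-and-add, written out on real and imaginary parts
        x, y = (x * x - y * y) + cx, (2.0 * x * y) + cy
        if (x * x + y * y) > bailout:
            return iteration   # point escaped: it is outside the set
    return 0                   # never escaped: treated as inside the set

print(escape_time(0.0, 0.0))   # 0 -> the origin is in the set
print(escape_time(2.0, 2.0))   # 2 -> escapes after two iterations
```

The iteration count returned here plays the same role as ''ts'' in the example, which the colourisation stage then maps to red, green and blue values.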
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\\n255\\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The Mandelbrot set will be calculated for up to ''itermax'' iterations at each point; by increasing this value you will get a crisper image (but it will take much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 will generate the whole image, and increasing it directs the computation to work on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable Pixmap (PPM) format. Viewers for these are easy to come by on Unix-based systems (e.g. Eye of GNOME) but slightly more difficult to find on Windows. Windows users might want to rewrite the last part run on process 0 so that a BMP (bitmap) file is created instead.
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here]
b077ff12e1edc6496e3f954f7b6272f5640cbc7b
735
734
2010-01-11T18:07:16Z
Polas
1
wikitext
text/x-wiki
== Overview ==
[[Image:mandle.gif|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which is quite simple, does not really matter for our purposes. The important points are that, firstly, the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and, secondly, it produces an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
<br style="clear: both" />
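The core of the per-pixel work is the classic escape-time iteration z &rarr; z&sup2; + c. As an illustration only (plain sequential Python, not Mesham; the function name escape_time is our own), it can be sketched as:

```python
# Sequential sketch of the escape-time iteration used per pixel.
# Illustrative Python only - the actual example is written in Mesham.

def escape_time(cx, cy, itermax=1000, bailout=100.0):
    """Iterate z -> z*z + c; return the iteration at which |z|^2 exceeds
    the bailout (the Mesham code uses 100), or 0 if it never escapes."""
    x, y = 0.0, 0.0
    for iteration in range(1, itermax + 1):
        # Both right-hand sides use the old x and y, as in the Mesham
        # source (xx is computed before y is overwritten).
        x, y = (x * x - y * y) + cx, (2 * x * y) + cy
        if x * x + y * y > bailout:
            return iteration
    return 0

print(escape_time(0.0, 0.0))  # 0: the origin never escapes (inside the set)
print(escape_time(2.0, 2.0))  # 2: this point escapes almost immediately
```

The returned iteration count plays the role of ''ts'' in the Mesham source, which the colourisation step then maps to RGB values.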
== Performance ==
[[Image:mandlezoom.jpg|400px|left|Mandelbrot Performance Evaluation, Mesham against C-MPI]]
The Mandelbrot example was evaluated against one written in C-MPI on a supercomputing cluster. The graph details the performance of the two codes; on small numbers of processors their performance was so close that those results are not shown. Due to the embarrassingly parallel nature of this problem, the performance advantages of using Mesham do not start to stand out until a large number of processors is reached.
<br style="clear: both" />
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\\n255\\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The Mandelbrot set will be calculated for up to ''itermax'' iterations at each point; by increasing this value you will get a crisper image (but it will take much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 will generate the whole image, and increasing it directs the computation to work on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable Pixmap (PPM) format. Viewers for these are easy to come by on Unix-based systems (e.g. Eye of GNOME) but slightly more difficult to find on Windows. Windows users might want to rewrite the last part run on process 0 so that a BMP (bitmap) file is created instead.
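For reference, a P6 file is just a short ASCII header followed by three raw bytes per pixel. A minimal sketch (illustrative Python, writing a hypothetical 2x2 picture.ppm; the header comment mirrors the one in the Mesham source):

```python
# Write a tiny binary PPM (P6): ASCII header, then raw R,G,B bytes per pixel.
width, height = 2, 2
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]  # row-major

with open("picture.ppm", "wb") as f:
    f.write(b"P6\n# CREATOR: LOGS Program\n")   # magic number and comment
    f.write(b"%d %d\n255\n" % (width, height))  # dimensions and max value
    for r, g, b in pixels:
        f.write(bytes((r, g, b)))               # one pixel = three bytes
```

Converting to BMP on Windows would mean replacing this header and pixel layout with the BMP file structure; the per-pixel colour data itself is unchanged.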
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here]
3f38dd75afaef02bd0a147a15e49927b8bbedb7b
Prefix sums
0
137
747
746
2010-01-11T17:25:02Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Overview ==
Prefix sums is a simple parallel algorithm commonly used as a building block of many applications. Also known as a scan, it has each process sum its own value with the values of every preceding process. For instance, p=0 returns its own value, p=1 returns the sum of the values at p=1 and p=0, and p=2 returns the sum of the values at p=2, p=1 and p=0. The MPI reduce command often implements this communication via a logarithmic tree structure.
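To make the semantics concrete, here is the same inclusive scan written sequentially (an illustrative Python sketch; prefix_sums is our own name - the Mesham version below distributes this across processes using reduce):

```python
# Inclusive prefix sum (scan): element p of the result is the sum of
# inputs 0..p, i.e. the value process p would hold after a parallel scan.

def prefix_sums(values):
    out, running = [], 0
    for v in values:
        running += v       # accumulate everything up to and including v
        out.append(running)
    return out

print(prefix_sums([3, 1, 4, 1, 5]))  # [3, 4, 8, 9, 14]
```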
== Source Code ==
function void main[var arga,var argb]
{
var m:=10;
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var mine:Int;
mine:= randomnumber[0,toInt[argb#1]];
var i;
for i from 0 to m - 1
{
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print[p," = ",a,"\n"];
};
};
== Notes ==
The function main has been included here so that the user can provide, via command line arguments, the range of the random numbers to generate. The complexity of the prefix sums is abstracted away by using the reduce primitive communication type.
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]
c13db7a81db6a0f0b48acc45bcaa77d0095a574b
Dartboard PI
0
139
758
757
2010-01-11T17:25:52Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm for finding the value of PI. At this point it must be noted that there are much better methods out there for finding PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square is equal to the ratio between the two areas - which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example each process performs this simulated throwing a number of times, and then each process's approximation of PI is combined and averaged by one of the processes to obtain the result. Very roughly, this means that with d darts thrown over r rounds on n processes, the parallel time taken is the time needed to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded communication costs from the parallel case to simplify the concept.) In the example, changing the number of processes, the number of rounds and the number of darts thrown in each round will directly change the accuracy of the result.
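The Monte Carlo idea can be sketched sequentially as follows (illustrative Python; estimate_pi and the fixed seed are our own choices, not part of the Mesham example):

```python
import random

# Dartboard (Monte Carlo) estimate of PI: the fraction of random points in
# the square [-1,1] x [-1,1] landing inside the unit circle approaches PI/4.

def estimate_pi(darts, seed=42):
    rng = random.Random(seed)    # fixed seed so the sketch is reproducible
    hits = 0
    for _ in range(darts):
        x = 2 * rng.random() - 1  # random x coordinate in [-1, 1)
        y = 2 * rng.random() - 1  # random y coordinate in [-1, 1)
        if x * x + y * y < 1:
            hits += 1             # the dart landed on the board
    return 4 * hits / darts

print(estimate_pi(100000))  # close to 3.14159; improves with more darts
```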
== Source Code ==
var m:=10; // number of processes
var pi:array[Double,m,1]:: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var result:array[Double,m] :: allocated[single[on[0]]];
var mypi:Double;
mypi:=0;
var p;
par p from 0 to m - 1
{
var darts:=1000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i:=0;
for i from 0 to rounds
{
mypi:= mypi + (4 * (throwdarts[darts] % darts));
};
((pi#p)#0):=(mypi % rounds);
};
result:=pi;
proc 0
{
var avepi:Double;
avepi:=0;
var j:=0;
for j from 0 to m - 1
{
var y:=(result#j);
avepi:=avepi + y;
};
avepi:=avepi % m;
print["PI = ",avepi,"\n"];
};
function Int throwdarts[var darts]
{
darts: Int :: allocated[multiple[]];
var score:=0;
var n:=0;
for n from 0 to darts
{
var r:=randomnumber[0,1]; // random number between 0 and 1
var xcoord:=(2 * r) - 1;
r:=randomnumber[0,1]; // random number between 0 and 1
var ycoord:=(2 * r) - 1;
if ((sqr[xcoord] + sqr[ycoord]) < 1)
{
score:=score + 1; // hit the dartboard!
};
};
return score;
};
== Notes ==
An interesting aside is that we have used a function in this example, yet there is no main function. The throwdarts function will simulate throwing the darts for each round. As already noted in the language documentation, the main function is optional and without it the compiler will set the program entry point to be the start of the source code.
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]
e021275d40433f65bfe95c630d93168af80316b8
Prime factorization
0
140
767
766
2010-01-11T17:26:51Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Overview ==
This example will perform prime factorization of a number in parallel, returning the prime factors which make up that number. The example uses the all-reduce primitive communication type. There are actually a number of ways such a result can be obtained; this example is a simple parallel algorithm for the job.
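As a point of comparison, sequential trial division does the same job on one process (an illustrative Python sketch; prime_factors is our own name):

```python
# Sequential trial division: repeatedly divide n by the smallest divisor
# that fits. The Mesham example instead spreads candidate divisors across
# processes and uses an all-reduce to agree on the reduced value of n.

def prime_factors(n):
    factors, divisor = [], 2
    while n > 1:
        if n % divisor == 0:
            factors.append(divisor)  # divisor is necessarily prime here
            n //= divisor
        else:
            divisor += 1
    return factors

print(prime_factors(976))  # [2, 2, 2, 2, 61], since 976 = 2^4 * 61
```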
== Source Code ==
var n:=976; // this is the number to factorize
var m:=12; // number of processes
var s:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var k:=p;
var divisor;
var quotient:Int;
while (n > 1)
{
divisor:= getprime[k];
quotient:= n % divisor;
var remainder:= mod[n,divisor];
if (remainder == 0)
{
n:=quotient;
} else {
k:=k + m;
};
(s :: allreduce["min"]):=n;
if ((s==n) && (quotient==n))
{
print[divisor,","];
};
n:=s;
};
};
== Notes ==
Note how we have typed the quotient to be an integer - this means that the division n % divisor will throw away the remainder. Also, for the assignment s:=n, we have typed s to be an allreduce communication primitive (resulting in the MPI all-reduce command). However, later on we use s as a normal variable in the assignment n:=s, because the typing applied in the previous assignment is only temporary.
As an exercise, the example could be extended so that the user provides the number either by command line arguments or via program input.
== Download ==
You can download the prime factorization source code [http://www.mesham.com/downloads/fact.mesh here]
9ec4732181a24b4baf59fee565fe7efcae6889c8
Communication
0
149
828
827
2010-01-11T17:31:11Z
Polas
1
/* PRAM */
wikitext
text/x-wiki
== Communication ==
Key to parallel computing is the idea of communication. There are two general communication models, shared memory and message passing. It is important to consider both these models because of the different advantages and disadvantages which each exhibits.
== Shared Memory ==
In the shared memory model, each process shares the same memory and therefore the same data. In this model communication is implicit. When programming using this model, care must be taken to avoid memory conflicts. There are a number of different sub-models, such as the Parallel Random Access Machine (PRAM), whose simplicity has led to its popularity.
=== PRAM ===
The figure below illustrates how a PRAM would look, with each processor sharing the same memory and by extension the program to execute. However, a pure PRAM machine is impossible to create in reality with a large number of processors due to hardware constraints, so variations to this model are required in practice.
<center>[[Image:pram.gif|A Parallel Random Access Machine]]</center>
Incidentally, you can download a PRAM simulator (and very simple programming language) for it [http://www.mesham.com/downloads/Gui.zip here] (PRAM Simulator) and [http://www.mesham.com/downloads/apl.zip here] (very simple language.) This simulator, written in Java, implements a parallel version of the MIPS architecture. The simple language for it (APL) is cross compiled using GNU's cross assembler.
=== BSP ===
Bulk Synchronous Parallelism (BSP) is a parallel programming model that abstracts from low-level program structures in favour of supersteps. A superstep consists of a set of independent local computations, followed by a global communication phase and a barrier synchronisation. One of the major advantages of BSP is that with four parameters it is possible to predict the runtime cost of a parallel program. This model is considered a very convenient view of synchronisation. However, barrier synchronisation does have an associated cost: the performance of barriers on distributed-memory machines is predictable, although not good. On the other hand, despite this performance hit, with BSP there is no worry of deadlock or livelock and therefore no need for detection tools and their additional associated cost. The benefit of BSP is that it imposes a clearly structured communication model upon the programmer; however, extra work is required to perform the more complex operations, such as scattering of data.
=== Logic of Global Synchrony ===
Another shared-memory model is the Logic of Global Synchrony (LOGS). LOGS consists of a number of behaviours: an initial state, a final state and a sequence of intermediate states. The intermediate global states are made explicit, although the mechanics of communication and synchronisation are abstracted away.
=== Advantages ===
* Relatively Simple
* Convenient
=== Disadvantages ===
* Poor Performance
* Not Scalable
== Message Passing ==
Message passing is a paradigm used widely on certain classes of parallel machines, especially those with distributed memory. In this model, processors are very distinct from each other, with the only connection being that messages can be passed between them. Unlike in the shared memory model, in message passing communication is explicit. The figure below illustrates a typical message passing parallel system setup, with each processor equipped with its own services such as memory and IO. Additionally, each processor has a separate copy of the program to execute, which has the advantage of being able to tailor it to specific processors for efficiency reasons. A major benefit of this model is that processors can be added or removed on the fly, which is especially important in large, complex parallel systems.
<center>[[Image:messagepassing.gif|Message Passing Communication Architecture]]</center>
=== Advantages ===
* Good Performance
* Scalable
=== Disadvantages ===
* Difficult to program and maintain
db278209d1b54fd6c702b0b8c1c67eb0efdbf8fc
829
828
2010-01-11T17:31:45Z
Polas
1
/* PRAM */
wikitext
text/x-wiki
== Communication ==
Key to parallel computing is the idea of communication. There are two general communication models, shared memory and message passing. It is important to consider both these models because of the different advantages and disadvantages which each exhibits.
== Shared Memory ==
In the shared memory model, each process shares the same memory and therefore the same data. In this model communication is implicit. When programming using this model, care must be taken to avoid memory conflicts. There are a number of different sub-models, such as the Parallel Random Access Machine (PRAM), whose simplicity has led to its popularity.
=== PRAM ===
The figure below illustrates how a PRAM would look, with each processor sharing the same memory and by extension the program to execute. However, a pure PRAM machine is impossible to create in reality with a large number of processors due to hardware constraints, so variations to this model are required in practice.
<center>[[Image:pram.gif|A Parallel Random Access Machine]]</center>
Incidentally, you can download a PRAM simulator (and very simple programming language) for it [http://www.mesham.com/downloads/Gui.zip here] (PRAM Simulator) and [http://www.mesham.com/downloads/apl.zip here] (very simple language.) This simulator, written in Java, implements a parallel version of the MIPS architecture. The simple language for it (APL) is cross compiled using GNU's cross assembler.
=== BSP ===
Bulk Synchronous Parallelism (BSP) is a parallel programming model that abstracts from low-level program structures in favour of supersteps. A superstep consists of a set of independent local computations, followed by a global communication phase and a barrier synchronisation. One of the major advantages of BSP is that with four parameters it is possible to predict the runtime cost of a parallel program. This model is considered a very convenient view of synchronisation. However, barrier synchronisation does have an associated cost: the performance of barriers on distributed-memory machines is predictable, although not good. On the other hand, despite this performance hit, with BSP there is no worry of deadlock or livelock and therefore no need for detection tools and their additional associated cost. The benefit of BSP is that it imposes a clearly structured communication model upon the programmer; however, extra work is required to perform the more complex operations, such as scattering of data.
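The four-parameter cost prediction works as follows in the standard BSP cost model (a well-known formulation, stated here for reference rather than taken from this page):

 cost of a superstep = w + h*g + l

where w is the maximum local computation performed by any processor in the superstep, h is the maximum number of words any processor sends or receives, g is the network's cost per word transferred, and l is the cost of the barrier synchronisation. Together with the processor count p, these parameters allow the runtime of a whole program - a sum of superstep costs - to be predicted.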
=== Logic of Global Synchrony ===
Another shared-memory model is the Logic of Global Synchrony (LOGS). LOGS consists of a number of behaviours: an initial state, a final state and a sequence of intermediate states. The intermediate global states are made explicit, although the mechanics of communication and synchronisation are abstracted away.
=== Advantages ===
* Relatively Simple
* Convenient
=== Disadvantages ===
* Poor Performance
* Not Scalable
== Message Passing ==
Message passing is a paradigm used widely on certain classes of parallel machines, especially those with distributed memory. In this model, processors are very distinct from each other, with the only connection being that messages can be passed between them. Unlike in the shared memory model, in message passing communication is explicit. The figure below illustrates a typical message passing parallel system setup, with each processor equipped with its own services such as memory and IO. Additionally, each processor has a separate copy of the program to execute, which has the advantage of being able to tailor it to specific processors for efficiency reasons. A major benefit of this model is that processors can be added or removed on the fly, which is especially important in large, complex parallel systems.
<center>[[Image:messagepassing.gif|Message Passing Communication Architecture]]</center>
=== Advantages ===
* Good Performance
* Scalable
=== Disadvantages ===
* Difficult to program and maintain
af7187ab56364cec775fc0413bf846b88b48ddc9
File:2gb.jpg
6
166
911
2010-01-11T17:52:37Z
Polas
1
Fast Fourier Transformation with 2GB of data
wikitext
text/x-wiki
Fast Fourier Transformation with 2GB of data
729d28baa79fd9f53106a7732768ce410b323819
File:128.jpg
6
167
913
2010-01-11T17:55:51Z
Polas
1
Fast Fourier Transformation example performed with 128MB data
wikitext
text/x-wiki
Fast Fourier Transformation example performed with 128MB data
9673f48589455b2c2e20aa52d4982130e782a79c
File:Mandlezoom.jpg
6
168
915
2010-01-11T18:03:30Z
Polas
1
Mandelbrot Performance Tests
wikitext
text/x-wiki
Mandelbrot Performance Tests
56594bf810192a48e1ce114b660f32c20a23f5a8
File:Classc.jpg
6
169
917
2010-01-11T18:10:20Z
Polas
1
NASA's Parallel Benchmark IS class C
wikitext
text/x-wiki
NASA's Parallel Benchmark IS class C
67f08d79b2a9e83a032fb5034744f2ce3905862e
File:Classb.jpg
6
170
919
2010-01-11T18:11:45Z
Polas
1
NASA's Parallel Benchmark IS class B
wikitext
text/x-wiki
NASA's Parallel Benchmark IS class B
8d320be9de4ed6ba04c6c52f56a8c0132f826055
File:Total.jpg
6
171
921
2010-01-11T18:12:32Z
Polas
1
NASA's Parallel Benchmark IS Total Million Operations per Second
wikitext
text/x-wiki
NASA's Parallel Benchmark IS Total Million Operations per Second
e52f52f4684a6027386206f785248aa917b0cfa9
File:Process.jpg
6
172
923
2010-01-11T18:13:04Z
Polas
1
NASA's Parallel Benchmark IS Million Operations per Second per Process
wikitext
text/x-wiki
NASA's Parallel Benchmark IS Million Operations per Second per Process
5b31c180dca090e6f04338f0483305428ace98e5
NAS-IS Benchmark
0
144
797
796
2010-01-11T18:14:29Z
Polas
1
/* Performance Results */
wikitext
text/x-wiki
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers, in parallel, using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it: class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantities of numbers and hence present the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower-level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done using a super computer cluster, testing the Mesham code against existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|600px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|600px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|600px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|600px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than other examples, with combination files for each class of experiment. It is therefore not included on this page but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release which is also on the website.
== Download ==
9acd60bb8d5e6531dbfc1b31cb2d0842820abf25
798
797
2010-01-11T18:15:06Z
Polas
1
/* Performance Results */
wikitext
text/x-wiki
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers, in parallel, using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it: class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantities of numbers and hence present the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower-level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
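The bucket sort variant at the heart of IS can be sketched sequentially (an illustrative Python sketch; bucket_sort and its parameters are our own names, not the benchmark's tuned code):

```python
# Bucket sort: scatter keys into value-range buckets, sort each bucket,
# then concatenate. In the parallel benchmark each process owns a bucket,
# so after the scatter (the communication step) sorting is purely local.

def bucket_sort(keys, max_key, buckets=4):
    width = (max_key + buckets) // buckets  # value range per bucket
    bins = [[] for _ in range(buckets)]
    for k in keys:
        bins[k // width].append(k)          # scatter each key by its range
    out = []
    for b in bins:                          # sort each bucket locally
        out.extend(sorted(b))
    return out

print(bucket_sort([5, 1, 9, 3, 7, 0, 8], max_key=9))  # [0, 1, 3, 5, 7, 8, 9]
```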
== Performance Results ==
Performance tests were done using a super computer cluster, testing the Mesham code against existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|550px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|550px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|550px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|550px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than other examples, with combination files for each class of experiment. It is therefore not included on this page but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release which is also on the website.
== Download ==
87e1574f7fd87af1910962660a2f2389cd7746e7
Download 0.41 beta
0
37
195
194
2010-01-12T13:52:53Z
Polas
1
/* Download */
wikitext
text/x-wiki
''Please note: this version of Mesham is deprecated; if possible please use the latest version on the website.''
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although there are some aspects unavailable which means that the Gadget-2 port is not supported by this version (it requires 0.50.) Having said that, version 0.41 is the only one which currently explicitly supports Windows. Most likely explicit support for Windows will be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b) here] which is a zip file of approximately 1MB and the download supports both POSIX systems and Windows. Full instructions are included on installation for your specific system and installation instructions are also on this page.
== Installation on POSIX Systems ==
*Install Java RTE from java.sun.com
*Make sure you have a C compiler installed i.e. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
*The three different components must be configured for your machine and their locations; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory - e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from their current locations - moving them will cause problems, and if you need to move them you must rerun the script and remake them.
*Now type make all
*If you have root access, login as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these 7 steps you have installed the Mesham language onto your computer! Now read the readme file for information on how to run the compiler
NB: if you wish to change the configuration information created by the installer (this is for advanced users and is not required) then you can - the installer tells you where it has written its config files, and the documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX-based system and follow those instructions. No, seriously: many of the tools and support for parallelism are designed for Unix-based OSes, and as such you will face an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although installation and usage on Windows should still be possible for an advanced user in the future). Having said that, we have had Mesham 0.41 running fine on Windows; it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#Java Run Time Environment from java.sun.com
#A C compiler and GNU maker - MinGW is a very good choice that we suggest, at http://www.mingw.org/
#An implementation of MPI (see the MPI section for further details.)
==== Install ====
To install Mesham really all the hard work has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we would suggest c:\mesham but it really doesn't matter
*Now double-click the installwindows.bat file - this will run the installation script; make sure you answer all the questions correctly (if you make an error just rerun it). The script does a number of things: firstly it automatically configures the compiler with your settings, secondly it configures the server and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server but not compile the compiler, run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here; the simplest is to use one of our prebuilt libraries. In the libraries directory there will be two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit), extract the contents of the appropriate zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32. By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32. If you wish to compile the runtime library rather than use our prebuilt ones, then read the readme file in the libraries\windows directory. Note at this stage that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine, but libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we would suggest adding mc.exe (the file just compiled, in compiler\bin) to your MSDOS path. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and under System variables scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham - if not, change the path accordingly.)
Note - if you ever wish to move the location of the language files, you will need to rerun installwindows.bat file to reconfigure the setup. Secondly, there is a prebuild server runner called winrunserver.bat with some default options. If you dont want to build the items, you can run this, and then run compiler/wingui.bat for the Mesham into C viewer, without any other steps that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''' you MUST make the MPI executable files visible to Mesham. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 - if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, you can create Mesham source files and compile them anywhere. This text details how to get up and running quickly; consult the language manual for specific language details.
*First, run the server - this can be found in the server directory; simply double-click runserver.bat . The server will start up (this can take a few moments) and will tell you when it's ready
*Now, create a file - let's call it a.mesh. For the contents just put in:
var a:=34;
print[a,"\n"];
*Open an MS-DOS terminal window, change to the directory where a.mesh is located and type mc a.mesh . The compiler should generate a.exe , which you can run via MS-DOS or by double-clicking on it. There are many other options; type mc -h to list them
If there are any problems, you might need to configure or experiment with your MPI implementation. Certainly with MPICH2 you might need to start the process manager, called smpd.exe, in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the C code, but not compile it, you can use the language's C code viewer by double-clicking windowsgui.bat in compiler\java
==== MPI for Windows ====
It doesn't matter which implementation you install. Having said that, it seems that the majority of implementations have been created with Unix in mind rather than Windows. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and also the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install it. This will work automatically via the MPICH installer.
There are other options too; OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. We hope to make 0.50 available for download as soon as possible. There are some important differences between the two versions; the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
97bb7ba3753593482cfde2ad3aae8ca336ad75bf
Download 0.5
0
158
861
860
2010-01-12T14:08:04Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (mainly due to the runtime library), although more experienced developers may be able to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar Mesham 0.5 here] (2MB)
== Installation Instructions ==
There are three basic components required for installing Mesham - the client, the server and the runtime library:
* Install the Java RTE from java.sun.com
* Make sure you have a C compiler installed, e.g. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
* The three components must be configured for your machine and for where they are situated; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory - e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems; if you must move them, rerun the script and remake them.
* Now type make all
* If you have root access, login as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps, you have installed the Mesham language on your computer!
== Using the Compiler ==
Assuming you have installed the language, you will now want to start writing some code! First you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver . The server will start up, telling you the version number and date of the Mesham compiler, and will report when it is ready.
Now start a new terminal. If you are using MPICH 2, run an MPI daemon by typing mpd & . Create a Mesham source file (see the language documentation for information about the language itself) and compile it via mc. For instance, if the source file is named hello.mesh, compile it via mc hello.mesh . You should see an executable called hello
Run the executable via ./hello (or whatever it is called). You do not need to run it via the mpirun or mpiexec command (although you can if you want), as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java
Nb: If you wish to change the configuration information created by the installer (this is not required, and is for advanced users only), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the make file and removing the ''#'' before the relevant line. The two optional features are the files supporting the Gadget-2 port (the Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine).
f92c0d0c5c0c5d3e556d6cc5f52ab007c0880bba
862
861
2010-01-12T14:08:25Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (mainly due to the runtime library), although more experienced developers may be able to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar Mesham 0.5 here] (2MB tarball)
== Installation Instructions ==
There are three basic components required for installing Mesham - the client, the server and the runtime library:
* Install the Java RTE from java.sun.com
* Make sure you have a C compiler installed, e.g. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
* The three components must be configured for your machine and for where they are situated; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory - e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems; if you must move them, rerun the script and remake them.
* Now type make all
* If you have root access, login as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps, you have installed the Mesham language on your computer!
== Using the Compiler ==
Assuming you have installed the language, you will now want to start writing some code! First you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver . The server will start up, telling you the version number and date of the Mesham compiler, and will report when it is ready.
Now start a new terminal. If you are using MPICH 2, run an MPI daemon by typing mpd & . Create a Mesham source file (see the language documentation for information about the language itself) and compile it via mc. For instance, if the source file is named hello.mesh, compile it via mc hello.mesh . You should see an executable called hello
Run the executable via ./hello (or whatever it is called). You do not need to run it via the mpirun or mpiexec command (although you can if you want), as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java
Nb: If you wish to change the configuration information created by the installer (this is not required, and is for advanced users only), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the make file and removing the ''#'' before the relevant line. The two optional features are the files supporting the Gadget-2 port (the Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine).
7cd24c7f6a0385ebe54d3efd314802a155c2f446
863
862
2010-01-12T14:09:45Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (mainly due to the runtime library), although more experienced developers may be able to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar.gz Mesham 0.5 here] (700KB)
== Installation Instructions ==
There are three basic components required for installing Mesham - the client, the server and the runtime library:
* Install the Java RTE from java.sun.com
* Make sure you have a C compiler installed, e.g. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
* The three components must be configured for your machine and for where they are situated; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory - e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems; if you must move them, rerun the script and remake them.
* Now type make all
* If you have root access, login as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps, you have installed the Mesham language on your computer!
== Using the Compiler ==
Assuming you have installed the language, you will now want to start writing some code! First you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver . The server will start up, telling you the version number and date of the Mesham compiler, and will report when it is ready.
Now start a new terminal. If you are using MPICH 2, run an MPI daemon by typing mpd & . Create a Mesham source file (see the language documentation for information about the language itself) and compile it via mc. For instance, if the source file is named hello.mesh, compile it via mc hello.mesh . You should see an executable called hello
Run the executable via ./hello (or whatever it is called). You do not need to run it via the mpirun or mpiexec command (although you can if you want), as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java
Nb: If you wish to change the configuration information created by the installer (this is not required, and is for advanced users only), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the make file and removing the ''#'' before the relevant line. The two optional features are the files supporting the Gadget-2 port (the Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine).
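For illustration, enabling one of these options amounts to removing the leading ''#'' from the relevant line of the make file. The variable names below are hypothetical - use whatever names actually appear in the supplied make file:

```make
# Optional runtime library features, disabled by default.
# Remove the leading '#' from a line to enable that feature:

#GADGET_OBJS = peano_hilbert.o snapshot.o parameter_file.o
#HDF5_LIBS   = -lhdf5
```

After editing, rebuild the library with make for the change to take effect.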
2563ecc4aca7acd26ca145276e138e52e1946052
Download rtl 0.2
0
159
871
870
2010-01-12T14:15:32Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Runtime Library Version 0.2 ==
Version 0.2 is currently the most up-to-date version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many improvements over the previous version, so we suggest you use it. The version on this page is backwards compatible with version 0.41(b). This version does not explicitly support the Windows OS, although an experienced programmer may be able to install it on that system.
== Download ==
You can download the [http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2 here] (28KB)
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 0.5|Download 0.5 Package]] page.
b8d2567efe811c8e970c31212b8b977d929460d3
Downloads
0
165
903
902
2010-01-12T14:17:23Z
Polas
1
/* Compiler Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] '''latest'''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] '''latest'''
[http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b)] ''deprecated''
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1 source] ''deprecated''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''deprecated''
== Example Files ==
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
9e2e3bd444ee0e9003c580d6dc4f7bb1022741bd
904
903
2010-01-12T14:24:40Z
Polas
1
/* Example Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] '''latest'''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] '''latest'''
[http://www.mesham.com/downloads/all.zip Mesham Version 0.41(b)] ''deprecated''
[http://www.mesham.com/downloads/libraries.zip Runtime Library 0.1 source] ''deprecated''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''deprecated''
== Example Files ==
[http://www.mesham.com/downloads/npb.tar.gz NASA's Parallel Benchmark IS]
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
8fe5eaa60bfd5973a16e7ef79d2e21eaeee36951
905
904
2010-01-13T16:44:33Z
Polas
1
/* Compiler Files */
wikitext
text/x-wiki
''This page contains all the downloads available on this website''
== Compiler Files ==
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] '''latest'''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] '''latest'''
[http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b)] ''deprecated''
[http://www.mesham.com/downloads/libraries01.zip Runtime Library 0.1 source] ''deprecated''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''deprecated''
== Example Files ==
[http://www.mesham.com/downloads/npb.tar.gz NASA's Parallel Benchmark IS]
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
e645e7e06419ba5608a7eeaf912326c3d4f39a5f
906
905
2010-07-03T11:37:55Z
Polas
1
wikitext
text/x-wiki
<metadesc>All the files provided for downloads involved with Mesham</metadesc>
''This page contains all the downloads available on this website''
== Compiler Files ==
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] '''latest'''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] '''latest'''
[http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b)] ''deprecated''
[http://www.mesham.com/downloads/libraries01.zip Runtime Library 0.1 source] ''deprecated''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''deprecated''
== Example Files ==
[http://www.mesham.com/downloads/npb.tar.gz NASA's Parallel Benchmark IS]
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
dfc7b4b79f17423ab209deffa1ac7ccf9328497c
NAS-IS Benchmark
0
144
799
798
2010-01-12T14:19:00Z
Polas
1
/* Performance Results */
wikitext
text/x-wiki
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code - C with MPI being the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it: class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantity of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this means that some of the lower level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were carried out on a supercomputer cluster, comparing the Mesham code against the existing NASA C-MPI parallel code; both have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than that of the other examples, with combination files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
3b2d04c397b8b94e69ce42abe269522a08d906cd
800
799
2010-01-12T14:24:05Z
Polas
1
/* Download */
wikitext
text/x-wiki
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code - C with MPI being the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it: class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantity of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this means that some of the lower level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were carried out on a supercomputer cluster, comparing the Mesham code against the existing NASA C-MPI parallel code; both have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than that of the other examples, with combination files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
You can download the entire code package [http://www.mesham.com/downloads/npb.tar.gz here]
1974625eb597061a621d2bc3cab46084a9bf5161
801
800
2010-07-03T11:36:33Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code - C with MPI being the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it: class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantity of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this means that some of the lower level primitive communication types have been used, and hence it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done using a super computer cluster, testing the Mesham code against existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than other examples, with combination files for each class of experiment. It is therefore not included on this page but you can download it.
== Notes ==
Be aware, this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release which is also on the website.
== Download ==
You can download the entire code package [http://www.mesham.com/downloads/npb.tar.gz here]
45b70e60ee998cad217a636aba4f2d5f9186ef4a
Download 0.41 beta
0
37
196
195
2010-01-12T14:32:55Z
Polas
1
moved [[Download all]] to [[Download 0.4 beta]]
wikitext
text/x-wiki
''Please Note: This version of Mesham is deprecated; if possible please use the latest version on the website''
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although some aspects are unavailable, which means the Gadget-2 port is not supported by this version (it requires 0.50). Having said that, version 0.41 is currently the only version which explicitly supports Windows. Explicit support for Windows will most likely be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b) here], a zip file of approximately 1MB; the download supports both POSIX systems and Windows. Full installation instructions for your specific system are included, and installation instructions are also given on this page.
== Installation on POSIX Systems ==
*Install Java RTE from java.sun.com
*Make sure you have a C compiler installed, e.g. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
*The three components must be configured for your machine and for where they are situated; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory - e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems; if you must move them, rerun the script and remake them.
*Now type make all
*If you have root access, login as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps, you have installed the Mesham language on your computer! Now read the readme file for information on how to run the compiler
Nb: If you wish to change the configuration information created by the installer (this is not required, and is for advanced users only), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX-based system and follow those instructions. No, seriously - many of the tools and much of the support for parallelism really are designed for Unix-based OSes, and as such you will face an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although installation and usage on Windows should still be possible for an advanced user in the future). Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#Java Run Time Environment from java.sun.com
#A C compiler and GNU Make - MinGW is a very good choice that we suggest, at http://www.mingw.org/
#An implementation of MPI (see the MPI section for further details.)
==== Install ====
To install Mesham, really all the hard work has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we suggest c:\mesham but it really doesn't matter
*Now double-click the installwindows.bat file - this runs the installation script; make sure you answer all the questions correctly (if you make an error, just rerun it). The script does a number of things: first it automatically configures the compiler with your settings, then it configures the server, and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server but not compile the compiler, run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here. The simplest is to use one of our prebuilt libraries. In the libraries directory there are two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit), extract the contents of the appropriate zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32 . By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32 . If you wish to compile the runtime library yourself rather than use our prebuilt ones, read the readme file in the libraries\windows directory. Note that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine; libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we suggest adding mc.exe (the file just compiled, in compiler\bin) to your MS-DOS path. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham - if not, change the path accordingly.)
Note - if you ever move the location of the language files, you will need to rerun the installwindows.bat file to reconfigure the setup. Secondly, there is a prebuilt server runner called winrunserver.bat with some default options. If you don't want to build the items, you can run this and then run compiler/wingui.bat for the Mesham into C viewer; with no other steps, that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''': you MUST make the MPI executable files visible to Mesham. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 - if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, you can create Mesham source files and compile them anywhere. This text details how to get yourself up and running quickly; consult the language manual for specific language details.
*First, run the server - this can be found in the server directory; simply double click runserver.bat . The server will start up (this can take a few moments) and will tell you when it's ready
*Now, create a file - let's call it a.mesh. For the contents just put in:
var a:=34;
print[a,"\n"];
*Open an MS-DOS terminal window, change to the directory where a.mesh is located and type mc a.mesh . The compiler should generate a.exe , which you can run via MS-DOS or by double clicking on it. There are many other options; type mc -h to see them
If there are any problems, you might need to configure or experiment with your MPI implementation. Certainly with MPICH2 you might need to start the process manager, called smpd.exe, in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the generated C code, but not compile it, you can use the language's C code viewer by double clicking windowsgui.bat in compiler\java
==== MPI for Windows ====
It doesn't matter which implementation you install, although it seems that the majority of implementations have been created with Unix in mind rather than Windows. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and also the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from Microsoft's download site (http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en#). Then download MPICH for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install. This will work automatically via the MPICH installer.
There are other options too; OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. It is hoped to make 0.50 available for download as soon as possible. There are some important differences between the two versions; some of the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
Download all
#REDIRECT [[Download 0.4 beta]]
Template:Downloads
*[[Specification|Language specification]]
<hr>
*[[Download_0.5|All (''version 0.5'')]]
*[[Download_rtl_0.2|Runtime Library 0.2]]
<hr>
*[[Download_0.41_beta|All (''version 0.41b'')]]
*[[Download_rtl_0.1|Runtime Library 0.1]]
Download 0.4 beta
#REDIRECT [[Download 0.41 beta]]
Parallel Computing
== Parallel Computing ==
Parallel computing is the use of multiple computing resources to solve a problem. These problems can be very wide ranging, from small examples to highly complex cosmological simulations or weather prediction codes. Utilising parallel computing adds additional complexities and challenges to programming: the programmer must consider a wide variety of new concepts and change their mindset from sequential to parallel. Having said that, the world we live in is predominantly parallel, and as such it is natural to model problems in this way.
== The Problem ==
Current parallel languages are either conceptually simple or efficient - but not both. These aims have, until this point, been contradictory. If parallel computing is to grow (as we predict, given current advances in CPU and GPU technology) then this issue must be addressed. The problem is that we are using current, sequential ways of thinking to try and solve this programmability problem; instead we need to think "outside the box" and come up with a completely new solution.
== Current Solutions ==
There are numerous parallel language solutions currently in existence; we will consider just a few:
=== Message Passing Interface ===
The MPI standard is extremely popular within this domain. Although bindings exist for many languages, it is most commonly used with C. The result is low level, highly complex, difficult to maintain but efficient code. As the programmer must control all aspects of parallelism, they can often get caught up in low level details which are uninteresting but important. Additionally, the programmer is completely responsible for ensuring all communications will complete correctly, or else they run the risk of deadlock, livelock and similar errors.
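To make that low level bookkeeping concrete, here is a toy sketch (in Python for brevity; toy_send and toy_recv are invented names for illustration, not a real MPI binding) of the pattern an MPI programmer manages by hand: explicit destination ranks, explicit message tags, and send/receive pairs that must match exactly - an unmatched receive is precisely the deadlock risk described above.

```python
# Toy stand-in for MPI-style point-to-point messaging. Each message lands
# in a per-(receiver, tag) mailbox, mimicking how MPI matches an MPI_Send
# to an MPI_Recv by rank and tag. Not a real MPI API.
mailbox = {}

def toy_send(value, dest, tag):
    """Analogue of MPI_Send: the sender names the destination rank and tag."""
    mailbox[(dest, tag)] = value

def toy_recv(me, tag):
    """Analogue of MPI_Recv. In real MPI an unmatched receive blocks
    forever; here we raise instead to show the deadlock risk."""
    if (me, tag) not in mailbox:
        raise RuntimeError("no matching send: a real MPI program would deadlock")
    return mailbox.pop((me, tag))

# "Rank 0" sends to "rank 1" with tag 3; the receive must use the same tag.
toy_send(42, dest=1, tag=3)
print(toy_recv(1, tag=3))  # 42
```

Even in this toy form, correctness depends on the programmer keeping every rank/tag pair consistent across the whole program - exactly the kind of detail MPI codes in C must manage by hand.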
=== Bulk Synchronous Parallel ===
The BSP standard was once touted as the solution to parallel computing. Implementations of this standard are most commonly used in conjunction with C. The program is split into supersteps, and each superstep is split into three stages: computation, communication and global synchronisation via barriers. However, this synchronisation is very expensive, and as such the performance of BSP is generally much poorer than that of MPI. In addition, although the communication model adopted by BSP is simpler, the programmer must still address low level issues (such as pointers) imposed by the underlying language used.
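The superstep structure can be sketched as follows (a serial Python toy with invented names, not BSPlib): messages queued during the computation phase only become visible after the barrier, which is what makes the model simple but the synchronisation costly.

```python
def superstep(state, exchange):
    """One BSP superstep over a list of per-process values (serial toy).

    Phase 1: each process computes and queues one message (dest, value).
    Phase 2/3: at the barrier, all queued messages are delivered at once.
    """
    queued = {}
    for pid, local in enumerate(state):
        dest, value = exchange(pid, local)  # local computation + queued send
        queued[dest] = value
    # Barrier: communication completes only here, for everyone together.
    return [queued.get(pid, state[pid]) for pid in range(len(state))]

# Each "process" sends its value plus its pid to the next process in a ring.
state = [10, 20, 30]
state = superstep(state, lambda pid, v: ((pid + 1) % 3, v + pid))
print(state)  # [32, 10, 21]
```

Note that no value is readable by its receiver until the superstep ends - the global barrier is the only synchronisation point, which is why BSP programs are easy to reason about but pay a heavy synchronisation cost.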
=== High Performance Fortran ===
In HPF the programmer just specifies the general distribution of data, with the compiler taking care of all other aspects of parallelism (such as computation distribution and communication.) Although it is a simple, abstract language, because so much emphasis is placed upon the compiler to deduce parallelism, efficiency suffers. The programmer, who is often in a far better position to indicate parallel aspects, lacks control and is limited. One useful feature of HPF is that all parallel aspects are expressed via comments, such that an HPF program is also acceptable to a normal Fortran compiler.
=== Co-Array Fortran ===
This language is more explicit than HPF. The programmer, via co-arrays, will distribute computation and data but must rely on the compiler to determine communication (which is often one sided.) Because of this one sided communication, messages are often short, which results in the overhead of sending many different messages. Having said this, things are improving with reference to CAF: the upcoming Fortran standard is said to include co-arrays, which will see the integration of the CAF concepts into standard Fortran.
=== Unified Parallel C ===
UPC is certainly a parallel language to keep an eye on - there is much development time and effort being spent on it at the moment. UPC uses an explicit parallel execution model with a shared address space. There are memory management primitives added into the language, along with shared memory keywords and pointers. Adding all these keywords to the language does bloat it and results in a brittle, tightly coupled design. Additionally, C's array model is also inherited, which is limiting in data intensive parallel computations. One must still deal with pointers and the low level challenges that these impose.
=== Titanium ===
This is an explicitly parallel version of Java; it is safe, portable and allows one to build complex data structures. Similar to UPC, it uses a global address space with numerous keywords and constructs added to the language to support parallelism. However, OO has an imposed (hidden) cost in terms of serialising and deserialising objects. There is also literature which indicates that the JRE does not consider memory locality, which is important for performance in HPC applications working on large data sets.
b1b36173769a1c05e713141a30579e5f27919fbc
820
819
2010-01-12T16:20:37Z
Polas
1
/* Current Solutions */
wikitext
text/x-wiki
== Parallel Computing ==
Parallel computing is the use of multiple computing resources to solve a problem. These problems can be very wide ranging, from smaller examples to highly complex cosmological simulations or weather prediction codes. Utilising parallel computing adds additional complexities and challenges to programming. The programmer must consider a wide variety of new concepts and change their mindset from sequential to parallel. Having said that, the world we live in is predominantly parallel and as such it is natural to model problems in this way.
== The Problem ==
Current parallel languages are either conceptually simple or efficient - but not both. These aims have, until this point, been contradictory. If parallel computing is to grow (as we predict with current advances in CPU and GPU technology) then this issue must be addressed. The problem is that we are using current, sequential, ways of thinking to try and solve this programmability problem... instead we need to think "out the box" and come up with a completely new solution.
== Current Solutions ==
There are numerous parallel language solutions currently in existance, we will consider just a few:
=== Message Passing Interface ===
The MPI standard is extremly popular within this domain. Although bindings exist for many languages, most commonly it is used with C. The result is low level, highly complex, difficult to maintain BUT efficient code. As the programmer must control all aspects of parallelism they can often get caught up in the low level details which are uninteresting but important. Additionally the programmer is completely responsible for ensuring all communications will complete correctly, or else they run the risk of deadlock, livelock etc...
=== Bulk Synchronous Parallel ===
The BSP standard was once touted as the solution to parallel computing. Implementations of this standard are most commonly used in conjunction with C. The program is split into supersteps, and each superstep is split into three stages - computation, communication and global synchronisation via barriers. However, this synchronisation is very expensive, and as such the performance of BSP is generally much poorer than that of MPI. Additionally, although the communication model adopted by BSP is simpler, the programmer must still address low-level issues (such as pointers) imposed by the underlying language used.
=== High Performance Fortran ===
In HPF the programmer just specifies the general distribution of data, with the compiler taking care of all other aspects of parallelism (such as computation distribution and communication.) Although HPF is a simple, abstract language, because so much emphasis is placed upon the compiler to deduce parallelism, efficiency suffers. The programmer, who is often in a far better position to indicate parallel aspects, lacks control and is limited. One useful feature of HPF is that all parallel aspects are expressed via comments, such that an HPF program is also acceptable to a normal Fortran compiler.
=== Co-Array Fortran ===
This language is more explicit than HPF. The programmer, via co-arrays, will distribute computation and data but must rely on the compiler to determine communication (which is often one sided.) Because of this one-sided communication, messages are often short, which results in the overhead of sending many different messages. Having said this, things are improving with reference to CAF: the upcoming Fortran standard is said to include co-arrays, which will see the integration of the CAF concepts into standard Fortran.
=== Unified Parallel C ===
UPC is certainly a parallel language to keep an eye on - there is much development time and effort being spent on it at the moment. UPC uses an explicit parallel execution model with a shared address space. Memory management primitives are added into the language, along with shared memory keywords and pointers. Adding all these keywords bloats the language and results in a brittle, tightly coupled design. Additionally C's array model is inherited, which is limiting in data-intensive parallel computations, and one must still deal with pointers and the low-level challenges that these impose.
=== Titanium ===
Titanium is an explicit parallel version of Java; it is safe, portable and allows one to build complex data structures. Like UPC it uses a global address space, with numerous keywords and constructs added to the language to support parallelism. However, object orientation imposes a hidden cost in terms of serialising and deserialising objects, and there is also literature which indicates that the JRE does not consider memory locality, which is important for performance in HPC applications working on large data sets.
=== ZPL ===
ZPL is an array programming language. Its authors observed that a large majority of parallel programming is done with respect to arrays of data, and to this end they created a language with specific keywords and constructs to assist in this. For instance the expression ''A=B*C'' sets each element of array ''A'' to the product of the corresponding elements of arrays ''B'' and ''C''. Whilst this is a useful abstraction, unfortunately parallelism itself is implicit, with limited control on behalf of the programmer. The net result is that much emphasis is placed upon the compiler to find the best solution and, with limited information, performance is adversely affected. Incidentally, in Mesham the types have been written such that a concept such as array programming can easily be included; the same expression is perfectly acceptable to Mesham, with the complexity of the operation being handled in the type library.
=== NESL ===
NESL is a functional parallel language. Many people believe that functional programming is the answer to the problem of parallel languages. However, the programmer is so abstracted from the actual machine that it is not possible to hand-optimise code (one is completely reliant on the compiler's efficiency), nor is it often possible to determine the actual runtime cost of an algorithm (although it is often possible to derive this theoretically.) This high level of abstraction means that it is difficult, and in some cases impossible, for the NESL programmer to elicit high performance with current compiler technology. There is also the, sometimes misguided, belief amongst programmers that functional languages are difficult to learn. Whilst this is not always the case it does put many programmers off, especially when the performance benefits of learning NESL are mediocre at best.
7ee6187bbe5dd69f0959309d35981f4e5e307db5
821
820
2010-07-03T11:30:23Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Parallel Computing ==
Parallel computing is the use of multiple computing resources to solve a problem. These problems can be very wide ranging, from smaller examples to highly complex cosmological simulations or weather prediction codes. Utilising parallel computing adds additional complexities and challenges to programming. The programmer must consider a wide variety of new concepts and change their mindset from sequential to parallel. Having said that, the world we live in is predominantly parallel and as such it is natural to model problems in this way.
== The Problem ==
Current parallel languages are either conceptually simple or efficient - but not both. These aims have, until this point, been contradictory. If parallel computing is to grow (as we predict with current advances in CPU and GPU technology) then this issue must be addressed. The problem is that we are using current, sequential ways of thinking to try and solve this programmability problem... instead we need to think "outside the box" and come up with a completely new solution.
== Current Solutions ==
There are numerous parallel language solutions currently in existence; we will consider just a few:
=== Message Passing Interface ===
The MPI standard is extremely popular within this domain. Although bindings exist for many languages, it is most commonly used with C. The result is low-level, highly complex, difficult-to-maintain but efficient code. Because the programmer must control all aspects of parallelism, they can often get caught up in low-level details which are uninteresting but important. Additionally the programmer is completely responsible for ensuring all communications will complete correctly, or else they run the risk of deadlock, livelock and similar errors.
=== Bulk Synchronous Parallel ===
The BSP standard was once touted as the solution to parallel computing. Implementations of this standard are most commonly used in conjunction with C. The program is split into supersteps, and each superstep is split into three stages - computation, communication and global synchronisation via barriers. However, this synchronisation is very expensive, and as such the performance of BSP is generally much poorer than that of MPI. Additionally, although the communication model adopted by BSP is simpler, the programmer must still address low-level issues (such as pointers) imposed by the underlying language used.
=== High Performance Fortran ===
In HPF the programmer just specifies the general distribution of data, with the compiler taking care of all other aspects of parallelism (such as computation distribution and communication.) Although HPF is a simple, abstract language, because so much emphasis is placed upon the compiler to deduce parallelism, efficiency suffers. The programmer, who is often in a far better position to indicate parallel aspects, lacks control and is limited. One useful feature of HPF is that all parallel aspects are expressed via comments, such that an HPF program is also acceptable to a normal Fortran compiler.
=== Co-Array Fortran ===
This language is more explicit than HPF. The programmer, via co-arrays, will distribute computation and data but must rely on the compiler to determine communication (which is often one sided.) Because of this one-sided communication, messages are often short, which results in the overhead of sending many different messages. Having said this, things are improving with reference to CAF: the upcoming Fortran standard is said to include co-arrays, which will see the integration of the CAF concepts into standard Fortran.
=== Unified Parallel C ===
UPC is certainly a parallel language to keep an eye on - there is much development time and effort being spent on it at the moment. UPC uses an explicit parallel execution model with a shared address space. Memory management primitives are added into the language, along with shared memory keywords and pointers. Adding all these keywords bloats the language and results in a brittle, tightly coupled design. Additionally C's array model is inherited, which is limiting in data-intensive parallel computations, and one must still deal with pointers and the low-level challenges that these impose.
=== Titanium ===
Titanium is an explicit parallel version of Java; it is safe, portable and allows one to build complex data structures. Like UPC it uses a global address space, with numerous keywords and constructs added to the language to support parallelism. However, object orientation imposes a hidden cost in terms of serialising and deserialising objects, and there is also literature which indicates that the JRE does not consider memory locality, which is important for performance in HPC applications working on large data sets.
=== ZPL ===
ZPL is an array programming language. Its authors observed that a large majority of parallel programming is done with respect to arrays of data, and to this end they created a language with specific keywords and constructs to assist in this. For instance the expression ''A=B*C'' sets each element of array ''A'' to the product of the corresponding elements of arrays ''B'' and ''C''. Whilst this is a useful abstraction, unfortunately parallelism itself is implicit, with limited control on behalf of the programmer. The net result is that much emphasis is placed upon the compiler to find the best solution and, with limited information, performance is adversely affected. Incidentally, in Mesham the types have been written such that a concept such as array programming can easily be included; the same expression is perfectly acceptable to Mesham, with the complexity of the operation being handled in the type library.
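The elementwise meaning of ''A=B*C'' can be sketched in plain Python (an illustration only - this is neither ZPL nor Mesham code):

```python
# Whole-array expression A = B * C, written out elementwise:
# each element of A becomes the product of the corresponding
# elements of B and C.
B = [1.0, 2.0, 3.0, 4.0]
C = [10.0, 20.0, 30.0, 40.0]
A = [b * c for b, c in zip(B, C)]  # A[i] = B[i] * C[i]
print(A)
```

In ZPL (and in Mesham's type library) this loop is hidden behind the single expression, leaving the compiler or type library free to distribute it across processors.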
=== NESL ===
NESL is a functional parallel language. Many people believe that functional programming is the answer to the problem of parallel languages. However, the programmer is so abstracted from the actual machine that it is not possible to hand-optimise code (one is completely reliant on the compiler's efficiency), nor is it often possible to determine the actual runtime cost of an algorithm (although it is often possible to derive this theoretically.) This high level of abstraction means that it is difficult, and in some cases impossible, for the NESL programmer to elicit high performance with current compiler technology. There is also the, sometimes misguided, belief amongst programmers that functional languages are difficult to learn. Whilst this is not always the case it does put many programmers off, especially when the performance benefits of learning NESL are mediocre at best.
f5d1f2a61b7e6ec48511765e0978831650b65993
Arjuna
0
175
929
2010-04-24T14:38:18Z
Polas
1
Created page with '==Introduction== The Arjuna line of compilers for Mesham are versioned from 0.0 up to 0.99. The latest compiler release based upon the Arjuna core is 0.5. The reason for the dist…'
wikitext
text/x-wiki
==Introduction==
The Arjuna line of compilers for Mesham covers versions 0.0 up to 0.99; the latest compiler release based upon the Arjuna core is 0.5. The reason for the distinction is that it was decided to rewrite the compiler, and as such a clear separation between the architectures and technology is useful. Arjuna was the informal name of the language, and specifically of the compiler, before the name Mesham was decided upon.
a07bc4a09d1827abba6b9a8218829fcc54cb271e
930
929
2010-04-24T14:50:32Z
Polas
1
wikitext
text/x-wiki
==Introduction==
The Arjuna line of compilers for Mesham covers versions 0.0 up to 0.99; the latest compiler release based upon the Arjuna core is 0.5. The reason for the distinction is that it was decided to rewrite the compiler, and as such a clear separation between the architectures and technology is useful. Arjuna was the informal name of the language, and specifically of the compiler, before the name Mesham was decided upon.
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object oriented language designed for compiler writing (this is certainly the biggest project written in that language). The reason for this choice was that the compiler was fast to write and very flexible, although quite slow in translation. This aspect of the code is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, kept quite separate and connected via defined services.
FlexibO does have its limits, and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2,000 lines, acts as a band-aid for FlexibO and, for instance, adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3,000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, Oubliette, is actually based on the existing RTL, but additional services are required, so the Arjuna line is not forward compatible (although the Oubliette RTL should work with Arjuna.)
27c5f886aa87489d1c6877d2eb052cabcf8cab09
931
930
2010-04-24T15:01:19Z
Polas
1
wikitext
text/x-wiki
==Introduction==
The Arjuna line of compilers for Mesham covers versions 0.0 up to 0.99; the latest compiler release based upon the Arjuna core is 0.5. The reason for the distinction is that it was decided to rewrite the compiler, and as such a clear separation between the architectures and technology is useful. Arjuna was the informal name of the language, and specifically of the compiler, before the name Mesham was decided upon.
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object oriented language designed for compiler writing (this is certainly the biggest project written in that language). The reason for this choice was that the compiler was fast to write and very flexible, although quite slow in translation. This aspect of the code is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, kept quite separate and connected via defined services.
FlexibO does have its limits, and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2,000 lines, acts as a band-aid for FlexibO and, for instance, adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3,000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, Oubliette, is actually based on the existing RTL, but additional services are required, so the Arjuna line is not forward compatible (although the Oubliette RTL should work with Arjuna.)
==Advantages==
Arjuna works by the compiler writer hand crafting each aspect, whether a core function or a library type, specifying the resulting compiled code and any optimisations to be applied. Whilst this produces very efficient results, it is time consuming and does not allow the Mesham programmer to specify their own types in their code. Arjuna is also very flexible: vast changes in the language were quite easy to implement. This level of flexibility would not be present in other solutions, and from an iterative language design view it was an essential requirement.
==Disadvantages==
So why rewrite the compiler? Flexibility comes at a price: slow compilation. Now that the language has reached a level of maturity, the core aspects can be written without worry that they will change much. It would also be good to allow programmers to design and implement types in their own Mesham code, which the architecture of Arjuna would find difficult (although not impossible) to support.
There is the additional fact that Arjuna has been modified and patched so much that the initial clean design is starting to blur; with the lessons learned, a much cleaner compiler can be created. Lastly, it is my feeling that we are reaching the limits of FlexibO - the fact that hacks are required to use this language for the compiler is probably a sure indication that it is time to rewrite it in a different language.
6301d947495fe9e16b932597fd656dd4cf4e95e0
Oubliette
0
176
938
2010-04-24T15:07:29Z
Polas
1
Created page with '==Introduction== Oubliette is the Mesham compiler line from version 1.0 onwards. This line is completely rewritten from the previous Arjuna, using lessons learned and the fact th…'
wikitext
text/x-wiki
==Introduction==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous Arjuna line, drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach. A major improvement will be support for programmers to create their own types in program code, making the type oriented paradigm more useful and acceptable to programmers.
==Progress==
Oubliette is currently in the inception phase; these pages will be updated as we progress!
edc59e5f15180462f77625307c977a432218af58
Mesham
0
5
19
18
2010-07-03T11:29:19Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high
performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= Introduction}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Second column -->
{{Box|subject= Downloads}}
{{Box|subject= In Development}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Documentation}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Forth column -->
{{Box|subject= Examples}}
|}
f2617cddf8c027398fe00bfed8b8f541cd3ddf70
20
19
2010-07-03T11:29:50Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= Introduction}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Second column -->
{{Box|subject= Downloads}}
{{Box|subject= In Development}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Documentation}}
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Forth column -->
{{Box|subject= Examples}}
|}
20a60d41174f461e2df21b773dfbf07fab3a8bf1
What is Mesham
0
15
93
92
2010-07-03T11:30:13Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==Introduction==
As technical challenges increase, the notion of using many computers to solve tasks is a very attractive one and has been the focus of much research. However, with the advent of Symmetric MultiProcessors (SMPs), a weakness in this field has been exposed - it is actually very difficult to write parallel programs of any complexity, and if the programmer is not careful they can end up with an abomination to maintain. Up until this point, simplicity of programming and efficiency have been trade-offs, with the most common parallel codes being written in low level languages.
==Mesham==
'''Mesham''' is a programming language designed to simplify High Performance Computing (HPC) yet produce highly efficient executables. This is achieved mainly via the type system: allowing programmers to provide extra typing information not only lets the compiler perform far more optimisation than is traditionally possible, it also enables conceptually simple programs to be written. Code written in Mesham is relatively simple, efficient, portable and safe.
==Type Oriented Programming==
In ''type oriented programming'' the majority of the complexity of the language is taken away and put into the type system. Whilst abstractions such as functional programming and object orientation have become popular and widespread, use of the type system in this way is completely novel. Placing the complexity of the language into the type system allows for a simple language yet yields high performance due to the rich amount of information readily available to the compiler.
==Why Mesham?==
'''Mesham''' will be of interest to many different people:
*Scientists - With Mesham you can write simple yet highly efficient parallel HPC code which can easily run on a cluster of machines
*HPC Programmers - Mesham can be used in conjunction with Grid computing, with the program being run over heterogeneous resources
*Normal Computer Users - Programs written in Mesham run seamlessly on SMPs, as a programmer you can take advantage of these multiple processors for common tasks
86e9d78efcaa5bdba32809fa5a09a761e8bbe101
Communication
0
149
830
829
2010-07-03T11:30:32Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Communication ==
Key to parallel computing is the idea of communication. There are two general communication models, shared memory and message passing. It is important to consider both these models because of the different advantages and disadvantages which each exhibits.
== Shared Memory ==
In the shared memory model, each process shares the same memory and therefore the same data. In this model communication is implicit. When programming using this model care must be taken to avoid memory conflicts. There are a number of different sub-models, such as the Parallel Random Access Machine (PRAM), whose simplicity has led to its popularity.
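The shared memory model can be sketched with threads, which implicitly share a single counter; the lock is what prevents the memory conflicts mentioned above (a minimal Python illustration, not a PRAM implementation):

```python
import threading

# Four threads share one counter implicitly (no messages are sent);
# the lock guards the shared location against conflicting updates.
counter = 0
lock = threading.Lock()

def work(times):
    global counter
    for _ in range(times):
        with lock:        # avoid a memory conflict on the shared counter
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Without the lock the increments could interleave and updates would be lost, which is exactly the kind of conflict the shared memory model requires the programmer to guard against.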
=== PRAM ===
The figure below illustrates how a PRAM would look, with each processor sharing the same memory and by extension the program to execute. However, a pure PRAM machine is impossible to create in reality with a large number of processors due to hardware constraints, so variations to this model are required in practice.
<center>[[Image:pram.gif|A Parallel Random Access Machine]]</center>
Incidentally, you can download a PRAM simulator [http://www.mesham.com/downloads/Gui.zip here] and a very simple programming language for it [http://www.mesham.com/downloads/apl.zip here]. This simulator, written in Java, implements a parallel version of the MIPS architecture. The simple language for it (APL) is cross compiled using GNU's cross assembler.
=== BSP ===
Bulk Synchronous Parallelism (BSP) is a parallel programming model that abstracts from low-level program structures in favour of supersteps. A superstep consists of a set of independent local computations, followed by a global communication phase and a barrier synchronisation. One of the major advantages of BSP is that with just four parameters it is possible to predict the runtime cost of a parallel program. The model is considered a very convenient view of synchronisation. However, barrier synchronisation has an associated cost: the performance of barriers on distributed-memory machines is predictable, although not good. In return for this performance hit, BSP programs cannot deadlock or livelock, so there is no need for detection tools and their additional associated cost. The benefit of BSP is that it imposes a clearly structured communication model upon the programmer, although extra work is required to perform more complex operations, such as scattering of data.
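The superstep structure can be sketched with threads standing in for BSP processes (a minimal Python illustration, not a BSP library; the neighbour-exchange pattern is invented for the example):

```python
import threading

# One superstep for P "processes": local computation, communication,
# then a global barrier before anyone reads the communicated values.
P = 4
barrier = threading.Barrier(P)
inbox = [0] * P       # one slot per process for incoming data
results = [0] * P

def superstep(rank, data):
    local = data * data            # 1. independent local computation
    inbox[(rank + 1) % P] = local  # 2. communication: send to neighbour
    barrier.wait()                 # 3. global barrier synchronisation
    results[rank] = inbox[rank]    # the next superstep may now read safely

threads = [threading.Thread(target=superstep, args=(r, r + 1)) for r in range(P)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

The barrier guarantees every send has completed before any receive is read, which is why BSP programs cannot deadlock, and also why the barrier's cost dominates on distributed-memory machines.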
=== Logic of Global Synchrony ===
Another model following the shared memory approach is the Logic of Global Synchrony (LOGS). LOGS consists of a number of behaviours - an initial state, a final state and a sequence of intermediate states. The intermediate global states are made explicit, although the mechanics of communication and synchronisation are abstracted away.
=== Advantages ===
* Relatively Simple
* Convenient
=== Disadvantages ===
* Poor Performance
* Not Scalable
== Message Passing ==
Message passing is a paradigm used widely on certain classes of parallel machines, especially those with distributed memory. In this model, processors are very distinct from each other, with the only connection being that messages can be passed between them. Unlike in the shared memory model, in message passing communication is explicit. The figure below illustrates a typical message passing parallel system setup, with each processor equipped with its own services such as memory and IO. Additionally, each processor has a separate copy of the program to execute, which has the advantage of being able to tailor it to specific processors for efficiency reasons. A major benefit of this model is that processors can be added or removed on the fly, which is especially important in large, complex parallel systems.
<center>[[Image:messagepassing.gif|Message Passing Communication Architecture]]</center>
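The explicit send/receive style can be sketched with one mailbox queue per process, threads standing in for processors (a Python illustration, not MPI or any real message passing library):

```python
import queue
import threading

# One mailbox per "processor"; the workers share nothing except the
# messages they explicitly put into and get from each other's mailboxes.
mailboxes = [queue.Queue() for _ in range(2)]
log = []

def worker(rank):
    if rank == 0:
        mailboxes[1].put("ping")        # explicit send to processor 1
        log.append(mailboxes[0].get())  # explicit (blocking) receive
    else:
        msg = mailboxes[1].get()        # receive from processor 0
        mailboxes[0].put(msg + "/pong") # reply

threads = [threading.Thread(target=worker, args=(r,)) for r in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)
```

Note that each blocking `get` is an opportunity for deadlock if the matching `put` never happens - exactly the hazard the shared memory model avoids and the message passing programmer must manage.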
=== Advantages ===
* Good Performance
* Scalable
=== Disadvantages ===
* Difficult to program and maintain
155dd82514b07e687083967185f5b03adaabcc62
Computation
0
152
837
836
2010-07-03T11:30:41Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Flynn's Taxonomy ==
This is an important classification of computer architectures proposed in the 1960s. It is important to match the appropriate computation model to the problem being solved. The two main classifications are shown below, although many languages allow the programmer to mix these classifications and Mesham is no different.
=== Single Program Multiple Data ===
In SPMD, each process executes the same program with its own data. The benefit of SPMD is that only one set of code need be written for all processors, although such code can become bloated and there is little scope for optimising specific parts for specific architectures.
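The SPMD idea - one program text, rank-dependent data - can be sketched as follows (a Python illustration; the strided data split is an assumption for the example):

```python
# SPMD sketch: every "process" runs the same program text, but its rank
# selects which slice of the data it works on.
def program(rank, nprocs, data):
    chunk = data[rank::nprocs]  # this process's portion of the data
    return sum(chunk)

data = list(range(8))
# simulate 4 processes all running the identical program
partials = [program(r, 4, data) for r in range(4)]
total = sum(partials)
print(partials, total)
```

In a real SPMD code the four calls run concurrently on separate processors and the final reduction is itself a communication step.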
=== Multiple Program Multiple Data ===
In MPMD each process executes its own program with its own data. The benefit of MPMD is that it is possible to tailor the code to run efficiently on each processor and it also keeps the code each processor will execute relatively small; however, writing code for each processor in a large system is not practical.
== The Design of Parallelism ==
In designing how your parallel program will exploit the advantages of parallelism there are two main ways in which the parallel aspects can be structured. Which form of parallelism to employ depends on the actual problem type.
=== Data Parallelism ===
In data parallelism each processor executes the same instructions but works on a different data set. For instance, with matrix multiplication, one processor may work on one section of the matrices whilst other processors work on other sections, solving the problem in parallel. As a generalisation, data parallelism, which often requires an intimate knowledge of the data and explicit parallel programming, usually yields better performance.
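The matrix multiplication example can be sketched with each hypothetical processor owning a block of result rows (plain Python, run sequentially here for clarity):

```python
# Data-parallel sketch of matrix multiplication: every worker runs the
# same instructions but owns a different block of rows of the result.
def multiply_rows(A, B, rows):
    # compute only the requested rows of A*B
    n = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(n)]
            for i in rows]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# two "processors", one row each; in a real code these run concurrently
C = multiply_rows(A, B, [0]) + multiply_rows(A, B, [1])
print(C)
```

Each worker needs all of ''B'' but only its own rows of ''A'', which is the kind of intimate data knowledge data parallelism typically demands.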
=== Task Parallelism ===
In task parallelism the program is divided up into tasks, each of which is sent to a unique processor to solve at the same time. Commonly, task parallelism can be seen when processors execute distinct threads or processes, and at the time of writing it is the popular way in which operating systems take advantage of multicore processors. Task parallelism is often easier to implement but less effective than data parallelism.
== Problem Classification ==
When considering both the advantages of parallelising a problem and how to do so, it is important to appreciate how the problem should be decomposed across multiple processors. There are two extremes of problem classification - embarrassingly parallel problems and tightly coupled problems.
=== Embarrassingly Parallel ===
Embarrassingly parallel problems are those which require very little or no work to separate into a parallel form, and often there exist no dependencies or communication between the processors. There are numerous examples of embarrassingly parallel problems, many of which exist in the graphics world, which is why the employment of many-core GPUs has become a popular performance-boosting choice.
=== Tightly Coupled Problems ===
The other extreme is that of tightly coupled problems, where it is very difficult to parallelise the problem and, if achieved, will result in many dependencies between processors. In reality most problems sit somewhere between these two extremes.
== Law of Diminishing Returns ==
There is a common misconception that "throwing" processors at a problem will automatically increase performance regardless of the number of processors or the problem type. This is simply not true, because compared with computation, communication is a very expensive operation. There is an optimum number of processors, after which the cost of communication outweighs the saving in computation made by adding an extra processor and the performance drops. The figure below illustrates a performance versus processors graph for a typical problem. As the number of processors is increased, performance firstly improves; however, after reaching an optimum point performance will then drop off. It is not uncommon in practice for the performance on far too many processors to be very much worse than it was on one single processor!
<center>[[Image:bell.jpg|As the number of processors goes too high performance will drop]]</center>
In theory a truly embarrassingly parallel problem (with no communication between processors) will not be subject to this rule, and the effect becomes more and more apparent as the problem type approaches that of a tightly coupled problem. The problem type, although a major consideration, is not the only factor shaping the performance curve - other issues, such as the types of processors, connection latency and the workload of the parallel cluster, will cause variations to this common bell curve.
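The bell curve can be reproduced with a toy cost model (the linear communication term is an assumption for illustration, not a measured cost):

```python
# Toy model behind the diminishing-returns curve: total time is
# computation, which shrinks with p, plus communication overhead,
# which (here, by assumption) grows linearly with p.
def runtime(p, compute=100.0, comm=1.0):
    return compute / p + comm * (p - 1)

times = {p: runtime(p) for p in (1, 5, 10, 20, 50)}
best = min(times, key=times.get)
print(best, times[best])
```

With these example constants the optimum is 10 processors; on 50 processors the model is already slower than on 5, mirroring the curve in the figure. A problem with `comm = 0` (truly embarrassingly parallel) would keep improving with every added processor.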
e332a1953b0d7c21c48e8dcd73c7bfb0043f97ed
Type Oriented Programming Concept
0
153
840
839
2010-07-03T11:30:50Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Type Oriented Programming ==
Much work has been done investigating programming paradigms. Common paradigms include imperative, functional, object oriented and aspect oriented. However, we have developed the idea of type oriented programming. Taking the familiar concept of a type, we have associated in-depth runtime semantics with it, so that the behaviour of variable usage (i.e. access and assignment) can be determined by analysing the specific type. In many languages there is a requirement to combine a number of attributes with a variable; to this end we allow the programmer to combine types together to form a supertype (type chain.)
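As an illustrative sketch (using the ''::'' combination operator and types documented elsewhere on this wiki), a single declaration can carry an element type, allocation information and access attributes:

 // element type Char, allocated to all processes, read only
 var m : Char :: allocated[multiple[]] :: const[];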
== Type Chains ==
A type chain is a collection of types, combined together by the programmer. It is this type chain that will determine the behaviour of a specific variable. Precedence in the type chain is from right to left (i.e. the last added type will override the behaviour of previously added types.) This precedence allows the programmer to add additional information, either permanently or for a specific expression, as the code progresses.
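A sketch of this right-to-left precedence, using types from the type library documented on this wiki:

 var i : Int :: allocated[multiple[]];  // read/write integer, allocated to all processes
 (i :: channel[1,2]) := 82;             // channel added to the chain just for this one assignment
 i : i :: const[];                      // from here on const overrides the default read/write behaviour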
== Type Variables ==
Type variables are an interesting concept. Similarly to normal program variables, they are declared to hold a type chain. Throughout program execution they can be treated like normal program variables: they can be checked via conditionals, assigned and modified.
== Advantages of the Approach ==
There are a number of advantages to type oriented programming:
* Efficiency - The rich amount of information allows the compiler to perform much static analysis and optimisation resulting in increased efficiency.
* Simplicity - By providing a clean type library the programmer can use well documented types to control many aspects of their code.
* Simpler language - By taking the majority of the language complexity away and placing it into a loosely coupled type library, the language is simpler from a design and implementation (compiler's) point of view. Adding numerous language keywords often results in a brittle design; using type oriented programming this is avoided
* Maintainability - By changing the type one can have a considerable effect on the semantics of code; abstracting the programmer away from the low level details makes the code simpler, more flexible and easier to maintain.
f0cf744081b43f1d0b6a7fdb9914e680825ab044
Download 0.5
0
158
864
863
2010-07-03T11:31:14Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (mainly in the runtime library), although it is possible for more experienced developers to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar.gz Mesham 0.5 here] (700KB)
== Installation Instructions ==
There are three basic components required for installing Mesham - the client, the server and the runtime library.
# Install the Java RTE from java.sun.com
# Make sure you have a C compiler installed, e.g. gcc
# Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
# Configure the three components for your machine and their locations - happily this is all automated in the installlinux script:
#* Open a terminal and cd into your Mesham directory, e.g. ''cd /home/work/mesham''
#* Issue the command ''./installlinux'' and follow the on-screen prompts
#* If there is an issue with running the command, use the command ''chmod +x installlinux'' and then try running it again
#* After running the install script, the library, compiler and server should not be moved from their current locations - moving them will cause problems, and if required you must rerun the script and remake them
# Now type ''make all''
# If you have root access, login as root and type ''make install''
# Now type ''make clean'' (to clean up the directory)
Congratulations! If you have completed these seven steps you have installed the Mesham language onto your computer!
== Using the Compiler ==
Assuming you have installed the language you will now want to start writing some code! Firstly you will need to start the Mesham translation server: cd into your mesham/server directory and type ''./runserver''. The server will start up, telling you the version number and date of the Mesham compiler, and will then report when it is ready.
Now start a new terminal. If you are using MPICH 2, run an MPI daemon by typing ''mpd &''. Create a Mesham source file (look in the language documentation for information about the language itself) and compile it via mc. For instance, if the source file name is hello.mesh, compile it via ''mc hello.mesh''. You should see an executable called hello.
Run the executable via ./hello (or whatever it is called.) You do not need to (although you can if you want) run it via the mpirun or mpiexec command, as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java
NB: If you wish to change the configuration information created by the installer (this is for advanced users and is not required), you can - the installer tells you where it has written its config files, and the documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional aspects which are disabled by default. These can be enabled by editing the makefile and removing the ''#'' before the specific line. The two optional aspects are the files in support of the Gadget-2 port (Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine.)
a828b4e0bcb4159e05b4effc2d47fb4d34c7d8f4
Download rtl 0.2
0
159
872
871
2010-07-03T11:31:22Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Runtime Library Version 0.2 ==
Version 0.2 is currently the most up-to-date version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many improvements over the previous version and as such it is suggested you use this. The version on this page is backwards compatible with version 0.41(b). This version does not explicitly support the Windows OS, although it will be possible for an experienced programmer to install it on that system.
== Download ==
You can download the [http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2 here] (28KB)
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 0.5|Download 0.5 Package]] page.
3e3e45dfa659b9e69f3efeca7f12b59cef548282
Download rtl 0.1
0
145
814
813
2010-07-03T11:32:59Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
''Please note: This version is now deprecated, please install version 0.2 if possible''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library Version 0.1 and the last version to provide explicit support for Windows Operating Systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b), it will not work with Mesham 0.5.
== Download ==
You can download version 0.1 of the [http://www.mesham.com/downloads/libraries01.zip Runtime Library here] ''(Source, cross-platform compatible.)''
You can download version 0.1 of the [http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library here] ''(Binary for Windows 32 bit.)''
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99 conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI.)
832240037a6ffd2caf01b702353ff0176dafee39
General Additions
0
155
849
848
2010-07-03T11:34:29Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Accepted Additions ==
# [[Extendable types]] - 0%
# Structure IO types - 0%
# Additional distribution types - 0%
# Group keyword - 0%
== Wish List ==
Please add here any features you would like to see in the upcoming development of Mesham
32d8ac7166294f997a37eba0357f5053e9592159
850
849
2010-07-03T11:34:56Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Accepted Additions ==
# [[Extendable Types]] - 0%
# Structure IO types - 0%
# Additional distribution types - 0%
# Group keyword - 0%
== Wish List ==
Please add here any features you would like to see in the upcoming development of Mesham
ec666b40f6c608e81dac3623499e9560cc7ad379
Extendable Types
0
154
845
844
2010-07-03T11:34:39Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
A major idea for extension is to allow the programmer to create their own language types. In the current version of the language the programmer can only create new types at the compiler level; this is not a major issue at the moment due to the generality of the type library, however it does limit the language somewhat. Whilst it is relatively simple to create new types in this way, one cannot expect the programmer to have to modify the compiler in order to support the codes they wish to write. There are, however, a number of issues to consider in relation to this aim.
* How to implement this efficiently?
* How to maximise static analysis and optimisation?
* How to minimise memory footprint?
* How best to structure the programming interface?
1e9203f4eadad268e8b85736f76a8d3cabbee9be
New Compiler
0
157
856
855
2010-07-03T11:35:06Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
The current Mesham compiler is mainly written in FlexibO, using Java to preprocess the source code. Whilst this combination is flexible, it is not particularly efficient in the compilation phase. To this end we are looking to reimplement the compiler in C. This reimplementation will allow us to combine all aspects of the compiler in one package, remove deprecated implementation code, clean up aspects of the compilation process, fix compiler bugs and provide a structured framework into which types can fit.
Like previous versions of the compiler, the results will be completely portable.
This page will be updated with news and developments in relation to this new compiler implementation.
aa5c376b57d6434395772936d77dd34757f34074
Introduction
0
17
102
101
2010-07-03T11:35:16Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==Why==
Mesham was developed as a parallel programming language with a number of concepts in mind. From reviewing existing HPC languages it is obvious that programmers place a great deal of importance on both performance and resource usage. Due to these constraining factors, HPC code is often very complicated, laced with little efficiency tricks, and becomes difficult to maintain as time goes on. It is often the case that existing HPC code (often written in C with a communications library) has reached such a level of complexity that efficiency takes a hit.
==Advantages of Abstraction==
By abstracting the programmer from the low level details there are a number of advantages.
*Easier to understand code
*Quicker production time
*Portability easier to achieve
*Changes, such as data structure changes, are easier to make
*The rich parallel structure provides the compiler with lots of optimisation clues
==Important Features==
In order to produce a language which is usable by current HPC programmers there are a number of features which we believe are critical to the language's success.
*Simpler to code in
*Efficient Result
*Transparent Translation Process
*Portable
*Safe
*Expressive
==Where We Are==
This documentation, and the language, are very much a work in progress. The documentation aims both to illustrate to a potential programmer the benefits of our language and approach, and to act as a reference for those using the language. There is much important development still to be done on the language and tools in order to build upon what has been created thus far.
ba3f4f909927f49e51f081e926c2ccb27a2c6972
The Arjuna Compiler
0
162
884
883
2010-07-03T11:35:23Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI is required on the target machine; OpenMPI, MPICH or a vendor-specific MPI will all work with the generated code. Additionally our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles - firstly it is architecture specific (versions exist for Linux, Windows etc.), as it contains any non-portable code which is needed and is also optimised for specific platforms. Secondly the runtime library contains functions which are often called and would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity the user can execute it by double clicking it, and the program will automatically spawn the number of processes required. Alternatively the executable can be run via the MPI daemon, and may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
== Translation In More Detail ==
The translator itself comprises a number of different phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete the code is sent to the translation server - owing to the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening on TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP and from there can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, however a web based interface also exists, allowing code to be written without the programmer installing any software on their machine.
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-I[dir]''' ''Look in the directory (as well as the current one) for preprocessor files''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-debug''' ''Display compiler structural warnings before rerunning''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically will place a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine; the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (and is linked in at runtime) - the advantages are that the executable is considerably smaller and a change in the RTL need not mean recompiling all your code.
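For example, using the ''-static'' and ''-shared'' options listed above with the ''mc'' compiler command (described on the download pages), a sketch of the two linking modes:

 mc hello.mesh -static   # self-contained executable, RTL copied in
 mc hello.mesh -shared   # default - RTL linked in at runtime on the target machine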
562198133541a2554eb259f16fc6bea9a8850aef
The Idea Behind Types
0
18
106
105
2010-07-03T11:35:31Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==A Type==
The concept of a type will be familiar to many programmers. A large subset of languages follow the syntax [Type] [Variablename], such as "int a" or "float b", to allow the programmer to declare a variable. Such a statement affects both the compiler and the runtime semantics - the compiler can perform analysis and optimisation (such as type checking), and at runtime the variable has a specific size and format. In these sorts of languages the programmer can be thought of as providing information to the compiler via the type. However, there is only so much that one single type can reveal, and so languages often include numerous keywords to allow the programmer to specify additional information. Taking C as an example, in order to declare a variable "m" to be a character in read only memory the programmer writes "const char m". In order to extend the language and allow for extra variable attributes (such as where a variable is located in the parallel programming context), new keywords would need to be introduced, which is less than ideal.
==Type Oriented Programming==
The approach adopted by Mesham is to allow the programmer to encode all variable information via the type system, by combining different types together to form a supertype (type chain.) In our language, "const char m" becomes "var m: Char :: const[]", where var m declares the variable, the operator ":" specifies the type and the operator "::" combines two types together. In this case, the supertype is formed by combining the type Char with the type const. It should be noted that some type coercions, such as "Int :: Char", are meaningless and so rules exist within each type to govern which combinations are allowed.
Type precedence is from right to left - in the example "Char :: const[]", the read only attributes of const can be thought of as overriding the default read/write attributes of Char. Abstractly, the programmer can consider the supertype (type chain) formed to be a little like a linked list. For instance the supertype created by "A::B::C::D::E" is illustrated below.
<center>[[File:types.jpg|Type Chain Illustration]]</center>
==Advantages==
Using this approach many different attributes can be associated with a variable. The fact that types are loosely coupled means that the language designers can add attributes (types) with few problems, and by only changing the type of a variable the semantics can change considerably. Another advantage is that the rich information provided by the programmer allows many optimisations to be performed during compilation which, in a lower level language, might not be obvious to the compiler.
==Technically==
On a more technical note, the type system implements a number of services. These are called by the core of the compiler and if the specific type does not honour that service, then the call is passed onto the next in the chain - until all are exhausted. For instance, using the types "A::B::C::D::E", if service "Q1" was called, then type "E" would be asked first, if it did not honour the service, "Q1" would be passed to type "D" - if that type did not honour it then it would be passed to type "C" and so forth.
542e7ec8569cd648c24cbb57da3a3b53d0081689
Category:Types
14
98
547
546
2010-07-03T11:35:49Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below. Where ''elementtype'' is defined in the type library, ''varname'' represents a variable name and ''type :: type'' represents type combination to coerce into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
Compound types are also listed in the type library; to give the reader a flavour, they may contain a number of different subcategories of type:
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
| extended types
== Declarations ==
=== Syntax ===
var name:type;
Where ''type'', as explained, is an ''elementtype'', a ''compoundtype'', variable name or ''type :: type''. The operator '':'' sets the type and ''::'' is type combination (coercion).
=== Semantics ===
This will declare a variable to be a specific type. Type combination is subject to a number of semantic rules. If no type information is given, then the type will be found via inference where possible.
=== Examples ===
var i:Int :: allocated[multiple[]];
Here the variable ''i'' is declared to be integer, allocated to all processes. There are three types included in this declaration, the element type [[Int]] and the compound types [[allocated]] and [[multiple]]. The type [[multiple]] is provided as an argument to the allocation type [[allocated]], which is then combined with the [[Int]] type.
var m:String;
In this example, variable ''m'' is declared to be of type [[String]]. For programmer convenience, by default, the language will automatically combine this with ''allocated[multiple]'' if such an allocation type is missing.
== Statements ==
=== Syntax ===
name:type;
=== Semantics ===
Will modify the type of an already declared variable via the '':'' operator. Note, allocation information may not be changed.
=== Examples ===
var i:Int :: allocated[multiple[]];
i:=23;
i:i :: const[];
Here the variable ''i'' is declared to be [[Int|integer]], [[allocated]] to all processes, and its value is set to 23. Later on in the code the type is modified to set it also to be [[const|constant]] (so from this point on the programmer may not change the variable's value.) In the third line, ''i:i :: const[];'' sets the type of ''i'' to be that of ''i'' combined with the [[const]] type.
'''Important Rule''' - Changing the type will not have any runtime code generation in itself, although the modified semantics will affect how the variable behaves from that point on.
== Expressions ==
=== Syntax ===
name::type
=== Semantics ===
When used as an expression, a variable's type can be coerced with additional types just for that expression.
=== Example ===
var i:Int :: allocated[multiple[]];
(i :: channel[1,2]):=82;
i:=12;
This code will declare ''i'' to be an [[Int|integer]], [[allocated]] on all processes. On line 2 ''i :: channel[1,2]'' will combine the [[channel]] type (primitive communication) just for that assignment and then on line 3 the assignment happens as a normal integer. This is because on line 2 we have not set the type of ''i'', just modified it for that assignment.
[[Category:Core Mesham]]
a45670c0a6b669f82b26706980aac7de89abad8b
Functions
0
38
205
204
2010-07-03T11:36:05Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Syntax ==
function returntype name[arguments]
== Semantics ==
In a function all arguments are passed by reference (even constants). If the type of an argument is a type chain (requires ''::'') then it should be declared in the body.
== Example ==
function Int add[var a:Int,var b:Int]
{
return a + b;
};
This function takes two integers and will return their sum.
== The main function ==
Returns void and, like C, can have either 0 arguments or 2. If present, the first argument is the number of command line interface parameters passed in, and the second argument is a String array containing these. Location 0 of the string array is the program name.
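As a sketch, using the declaration syntax shown above (the argument names here are illustrative, and the arguments are left untyped as in other examples on this wiki):

 function void main[var argc, var argv]
 {
    // argc holds the number of command line parameters
    // argv is a String array; argv#0 is the program name
 };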
[[Category:Core Mesham]]
8dc19f214ffad748752539057bcb42dfb2005dfc
Mandelbrot
0
135
736
735
2010-07-03T11:36:41Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:mandle.gif|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which are quite simple, really do not matter for our purposes. The important points are firstly that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and secondly that it will produce an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the specific colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
<br style="clear: both" />
== Performance ==
[[Image:mandlezoom.jpg|400px|left|Mandelbrot Performance Evaluation, Mesham against C-MPI]]
The Mandelbrot example was evaluated against one written in C-MPI on a supercomputing cluster. The graph details the performance of the two codes; their performance on small numbers of processors was essentially identical and as such is not shown. Due to the embarrassingly parallel nature of this problem, the performance advantages of using Mesham do not start to stand out until a large number of processors is reached.
<br style="clear: both" />
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\\n255\\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. In order to change the size of the image edit hxres and hyres. The Mandelbrot set will be calculated up to itermax iterations for each point; by increasing this value you will get a crisper image (but it will take much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 will generate the whole image, and by increasing this value the computation is directed into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap format (PPM). Viewers for these on Unix based systems are easy to come by (e.g. Eye of GNOME), but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last part on process 0 so that a BitMap (BMP) is created.
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here]
b1cf16160bef1d60d6da24cfa8e49ababb2c4713
Image processing
0
142
784
783
2010-07-03T11:36:49Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform some parallel image processing on a black and white image. The image processing supported is applying a low or high pass filter to the image. However, to do this the image needs to be transformed into the frequency domain - and then requires transformation back into the time domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and there are more efficient ones out there. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters and also invoke the high pass filter rather than the low pass which the code uses at the moment.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputing cluster. Two different experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. Evaluations were performed against the Fastest Fourier Transform in the West (FFTW) and, for 128MB, a book example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler will optimise the code in this case to avoid any slowdown.)
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
(
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\n# CREATOR: LOGS Program\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\n255\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=0; // the imaginary part of a real image sample is zero
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be gathered back on process 0 for filtering and then redistributed. Runtime would improve if the data could be filtered without collecting it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable Pixmap (PPM) format. Viewers for PPM files are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are harder to find on Windows. Windows users might want to rewrite the final section of code on process 0 so that a bitmap (BMP) file is created instead.
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
bac16350a01dda3858bf14ac2dfbbc860a961804
Prefix sums
0
137
748
747
2010-07-03T11:36:56Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Prefix sums is a very simple parallel algorithm commonly used as a building block of many applications. Also known as a scan, each process sums its own value with the values of every preceding process: p=0 returns its own value, p=1 returns the sum of p=1's and p=0's values, p=2 returns the sum of p=2's, p=1's and p=0's values, and so on. The MPI reduce command often implements the communication via the logarithmic structure shown below.
== Source Code ==
function void main[var arga,var argb]
{
var m:=10;
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var mine:Int;
mine:= randomnumber[0,toInt[argb#1]];
var i;
for i from 0 to m - 1
{
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print[p," = ",a,"\n"];
};
};
== Notes ==
The main function has been included here so that the user can provide, via command line arguments, the range of the random numbers to generate. The complexity of the prefix sums is hidden by using the reduce primitive communication type.
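To make the logarithmic structure concrete, here is a short Python sketch (not Mesham, and not the reduce implementation itself) of a Hillis-Steele inclusive scan, which doubles the combine distance on each sweep exactly as a logarithmic reduction tree does:

```python
def scan(values):
    """Inclusive prefix sums in ceil(log2(n)) sweeps: after sweep s,
    element i holds the sum of the (up to) 2**s values ending at i."""
    a = list(values)
    step = 1
    while step < len(a):
        # Each element adds in the partial sum `step` positions to its left.
        a = [a[i] + (a[i - step] if i >= step else 0) for i in range(len(a))]
        step *= 2
    return a
```

In the Mesham example each sweep of this loop corresponds to one round of communication rather than a pass over a local array.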
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]
1a4990b8c9a3c3349ed443ed9034a10745bb2b93
Dartboard PI
0
139
759
758
2010-07-03T11:37:05Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm for approximating the value of PI. It must be noted that there are much better methods for finding PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to darts landing anywhere on the square equals the ratio between the two areas, which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example, each process performs this simulated throwing a number of times, and then one process combines and averages every process's approximation of PI to obtain the result. Very roughly, this means that with d darts thrown over r rounds on n processes, the parallel version takes the time needed to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded communication costs from the parallel case to simplify the concept.) Changing the number of processes, the number of rounds and the number of darts thrown in each round will directly change the accuracy of the result.
== Source Code ==
var m:=10; // number of processes
var pi:array[Double,m,1]:: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var result:array[Double,m] :: allocated[single[on[0]]];
var mypi:Double;
mypi:=0;
var p;
par p from 0 to m - 1
{
var darts:=1000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i:=0;
for i from 0 to rounds
{
mypi:= mypi + (4 * (throwdarts[darts] % darts));
};
((pi#p)#0):=(mypi % rounds);
};
result:=pi;
proc 0
{
var avepi:Double;
avepi:=0;
var j:=0;
for j from 0 to m - 1
{
var y:=(result#j);
avepi:=avepi + y;
};
avepi:=avepi % m;
print["PI = ",avepi,"\n"];
};
function Int throwdarts[var darts]
{
darts: Int :: allocated[multiple[]];
var score:=0;
var n:=0;
for n from 0 to darts
{
var r:=randomnumber[0,1]; // random number between 0 and 1
var xcoord:=(2 * r) - 1;
r:=randomnumber[0,1]; // random number between 0 and 1
var ycoord:=(2 * r) - 1;
if ((sqr[xcoord] + sqr[ycoord]) < 1)
{
score:=score + 1; // hit the dartboard!
};
};
return score;
};
== Notes ==
An interesting aside is that we have used a function in this example, yet there is no main function. The throwdarts function simulates throwing the darts for each round. As already noted in the language documentation, the main function is optional; without it, the compiler sets the program entry point to the start of the source code.
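The estimator itself is easy to check sequentially. The sketch below (plain Python with a fixed seed for repeatability - both conveniences of this illustration, not part of the Mesham example) mirrors the rounds-of-darts structure:

```python
import random

def estimate_pi(darts, rounds, seed=0):
    """Average `rounds` independent dartboard estimates of pi."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        hits = 0
        for _ in range(darts):
            x = rng.uniform(-1.0, 1.0)
            y = rng.uniform(-1.0, 1.0)
            if x * x + y * y < 1.0:
                hits += 1  # dart landed inside the unit circle
        total += 4.0 * hits / darts  # this round's estimate of pi
    return total / rounds
```

With d darts and r rounds this costs d * r throws; the Mesham version divides those rounds across n processes so each only pays for d * r throws while effectively sampling d * r * n.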
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]
85e8cfdf1e4204091ef7a8cb54cea1ca7ba8e871
Prime factorization
0
140
768
767
2010-07-03T11:37:14Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example performs prime factorization of a number in parallel, returning the prime factors which make up that number. The example uses the all-reduce primitive communication type. There are a number of ways such a result can be obtained; this example is a simple parallel algorithm for the job.
== Source Code ==
var n:=976; // this is the number to factorize
var m:=12; // number of processes
var s:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var k:=p;
var divisor;
var quotient:Int;
while (n > 1)
{
divisor:= getprime[k];
quotient:= n % divisor;
var remainder:= mod[n,divisor];
if (remainder == 0)
{
n:=quotient;
} else {
k:=k + m;
};
(s :: allreduce["min"]):=n;
if ((s==n) && (quotient==n))
{
print[divisor,","];
};
n:=s;
};
};
== Notes ==
Note how we have typed the quotient to be an integer - this means that the division n % divisor will throw away the remainder. Also, in the assignment (s :: allreduce["min"]):=n, we have typed s with the allreduce communication primitive (which results in the MPI all-reduce command). Later on, however, we use s as a normal variable in the assignment n:=s, because the typing applied in the previous assignment was only temporary.
As an exercise, the example could be extended so that the user provides the number either by command line arguments or via program input.
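The underlying sequential method that each process accelerates is plain trial division. A minimal Python sketch (illustrative only - it omits the parallel stride over candidate divisors and the all-reduce) is:

```python
def prime_factors(n):
    """Trial division: repeatedly divide out the smallest divisor of n."""
    factors = []
    d = 2
    while n > 1:
        if n % d == 0:
            factors.append(d)  # smallest divisor of n is always prime
            n //= d            # integer division discards the remainder
        else:
            d += 1
    return factors
```

For the n:=976 used above this yields [2, 2, 2, 2, 61]. In the Mesham example each of the m processes tests every m-th candidate divisor and the all-reduce on "min" agrees on the smallest reduced value of n.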
== Download ==
You can download the prime factorization source code [http://www.mesham.com/downloads/fact.mesh here]
4e9312043674ea58b1961f62adecba9cb7cc4812
Specification
0
177
969
2012-10-17T14:13:50Z
Polas
1
Created page with '<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc> …'
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Version 1.0 ==
The latest version of the language specification, 1.0a_2 is available for download. Please note that this is an alpha version and as such the specification is liable to change.
720d5a2ed3780d21ce03f81d53321d7005b76ee0
970
969
2012-10-17T14:19:44Z
Polas
1
/* Version 1.0 */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Version 1.0 ==
The latest version of the language specification, 1.0a_2 is available for download. Please note that this is an alpha version and as such the specification is liable to change.
You can download [http://www.mesham.com/downloads/specification1a2.pdf this latest version here]
cd4f1555577db78be7f44f2e136187452f88d7f1
971
970
2012-10-17T14:25:32Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Version 1.0a_2 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_2 is available for download. This version was released October 2012 and is the base specification version in the 1 series. It builds upon the previous 0.5 language by formalising some of the aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language types but with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality which can be used by the programmer.
Download [http://www.mesham.com/downloads/specification1a2.pdf this latest version here]
51f8d4940ee9e573e87d8fb7717fd58131d8ce77
972
971
2012-10-17T14:25:48Z
Polas
1
/* Version 1.0a_2 */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Version 1.0a_2 - October 2012 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_2 is available for download. This version was released October 2012 and is the base specification version in the 1 series. It builds upon the previous 0.5 language by formalising some of the aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language types but with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality which can be used by the programmer.
Download [http://www.mesham.com/downloads/specification1a2.pdf this latest version here]
df64852ae0849b96ea348cb57a65f99109fad6a6
973
972
2012-10-17T14:26:23Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
The latest version of the Mesham language specification is 1.0a_2
== Version 1.0a_2 - October 2012 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_2 is available for download. This version was released October 2012 and is the base specification version in the 1 series. It builds upon the previous 0.5 language by formalising some of the aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language types but with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality which can be used by the programmer.
Download [http://www.mesham.com/downloads/specification1a2.pdf this latest version here]
597b5c62b0bca7b6176d63155a2f798915ab7620
What is Mesham
0
15
94
93
2012-10-17T15:42:48Z
mesham>MeredithLester
0
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==Introduction==
As technical challenges increase, the notion of using many computers to solve a task is very attractive and has been the focus of much research. However, with the advent of Symmetric MultiProcessors (SMPs), a weakness in this field has been exposed - it is actually very difficult to write parallel programs of any complexity, and if programmers are not careful they can end up with an abomination to maintain. Up until this point, simplicity of programming and efficiency have been a tradeoff, with most parallel codes being written in low-level languages.
==Mesham==
'''Mesham''' is a programming language designed to simplify High Performance Computing (HPC) while producing highly efficient executables. This is achieved mainly via the type system: allowing programmers to provide extra typing information not only lets the compiler perform far more optimisation than is traditionally possible, but also enables conceptually simple programs to be written. Code written in Mesham is relatively simple, efficient, portable and safe.
==Type Oriented Programming==
In ''type oriented programming'' the majority of the complexity of the language is taken away and put into the type system. Whilst abstractions such as functional programming and object orientation have become popular and widespread, use of the type system in this way is completely novel. Placing the complexity of the language into the type system allows for a simple language yet yields high performance due to the rich amount of information readily available to the compiler.
==Why Mesham?==
'''Mesham''' will be of interest to many different people:
*Scientists - With Mesham you can write simple yet highly efficient parallel HPC code which can easily run on a cluster of machines
*HPC Programmers - Mesham can be used in conjunction with Grid computing, with the program being run over heterogeneous resources
*Normal Computer Users - Programs written in Mesham run seamlessly on SMPs; as a programmer you can take advantage of these multiple processors for common tasks
04ce07cd941197978f1777287e074d02fd5898e8
95
94
2012-11-02T14:11:56Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==Introduction==
As technical challenges increase, the notion of using many computers to solve a task is very attractive and has been the focus of much research. However, with the advent of Symmetric MultiProcessors (SMPs), a weakness in this field has been exposed - it is actually very difficult to write parallel programs of any complexity, and if programmers are not careful they can end up with an abomination to maintain. Up until this point, simplicity of programming and efficiency have been a tradeoff, with most parallel codes being written in low-level languages.
==Mesham==
'''Mesham''' is a programming language designed to simplify High Performance Computing (HPC) while producing highly efficient executables. This is achieved mainly via the type system: allowing programmers to provide extra typing information not only lets the compiler perform far more optimisation than is traditionally possible, but also enables conceptually simple programs to be written. Code written in Mesham is relatively simple, efficient, portable and safe.
==Type Oriented Programming==
In ''type oriented programming'' the majority of the complexity of the language is taken away and put into the type system. Whilst abstractions such as functional programming and object orientation have become popular and widespread, use of the type system in this way is completely novel. Placing the complexity of the language into the type system allows for a simple language yet yields high performance due to the rich amount of information readily available to the compiler.
==Why Mesham?==
'''Mesham''' will be of interest to many different people:
*Scientists - With Mesham you can write simple yet highly efficient parallel HPC code which can easily run on a cluster of machines
*HPC Programmers - Mesham can be used in conjunction with Grid computing, with the program being run over heterogeneous resources
*Normal Computer Users - Programs written in Mesham run seamlessly on SMPs; as a programmer you can take advantage of these multiple processors for common tasks
86e9d78efcaa5bdba32809fa5a09a761e8bbe101
Specification
0
177
974
973
2012-11-26T12:07:31Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
The latest version of the Mesham language specification is 1.0a_3
== Version 1.0a_3 - November 2012 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_3 is available for download. This version was released November 2012 and is the base specification version in the 1 series. It builds upon the previous 0.5 language by formalising some of the aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language types but with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality which can be used by the programmer.
Download [http://www.mesham.com/downloads/specification1a3.pdf this latest version here]
3d994ef361563bcb686303554aa6862e42fde81c
Include
0
179
982
2013-01-12T13:08:08Z
Polas
1
Created page with '== Syntax == #include [sourcefile] == Semantics == Will read in the Mesham source file specified and will embed the contents of this source file into the code at the current l…'
wikitext
text/x-wiki
== Syntax ==
#include [sourcefile]
== Semantics ==
Will read in the Mesham source file specified and will embed the contents of this source file into the code at the current location
== Example ==
#include "test.mesh"
#include <io>
The preprocessing stage will replace the first include with the contents of ''test.mesh'', and the second with the contents of ''io''. In the absence of the ''.mesh'' ending, the preprocessor will first attempt to match on the exact filename and, if this cannot be found, will then look for a file with the corresponding name and a ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will also search the system include locations, which take priority.
[[Category:preprocessor]]
2b129784878f24e293b397557f1823c0c5a9f773
983
982
2013-01-12T13:08:22Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
#include [sourcefile]
== Semantics ==
Will read in the Mesham source file specified and will embed the contents of this source file into the code at the current location
== Example ==
#include "test.mesh"
#include <io>
The preprocessing stage will replace the first include with the contents of ''test.mesh'', and the second with the contents of ''io''. In the absence of the ''.mesh'' ending, the preprocessor will first attempt to match on the exact filename and, if this cannot be found, will then look for a file with the corresponding name and a ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will also search the system include locations, which take priority.
[[Category:preprocessor]]
7abe0aebde521ddfa81b2067a493dd3cd42a3c49
984
983
2013-01-12T13:12:06Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
<nowiki>#</nowiki>include [sourcefile]
== Semantics ==
Will read in the Mesham source file specified and will embed the contents of this source file into the code at the current location
== Example ==
#include "test.mesh"
#include <io>
The preprocessing stage will replace the first include with the contents of ''test.mesh'', and the second with the contents of ''io''. In the absence of the ''.mesh'' ending, the preprocessor will first attempt to match on the exact filename and, if this cannot be found, will then look for a file with the corresponding name and a ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will also search the system include locations, which take priority.
[[Category:preprocessor]]
61f41248928511aa51242d95de53b7d8b58a2dea
Include once
0
180
988
2013-01-12T13:13:58Z
Polas
1
Created page with '== Syntax == <nowiki>#</nowiki>include_once [sourcefile] == Semantics == Will read in the Mesham source file specified and will embed the contents of this source file into the…'
wikitext
text/x-wiki
== Syntax ==
<nowiki>#</nowiki>include_once [sourcefile]
== Semantics ==
Will read in the Mesham source file specified and will embed the contents of this source file into the code at the current location IF AND ONLY IF that specific file has not already been included. This is a very useful mechanism for avoiding duplicate includes when combining multiple libraries.
== Example ==
#include_once "test.mesh"
#include_once "test.mesh"
The preprocessing stage will replace the first include with the contents of ''test.mesh'', but the second include_once will be ignored because that specific file has already been included. In the absence of the ''.mesh'' ending, the preprocessor will first attempt to match on the exact filename and, if this cannot be found, will then look for a file with the corresponding name and a ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will also search the system include locations, which take priority.
[[Category:preprocessor]]
cc252841c9d207d70f0aa9d49649351ca7b39caa
Par
0
39
211
210
2013-01-12T13:22:17Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop: each iteration executes concurrently on a different process. This allows the programmer to write code in MPMD style, with the limitation that the bounds ''a'' and ''b'' must be known at compile time. Variables declared to be multiply allocated within a parallel scope, such as a par block, will automatically be allocated only to the subgroup of processes within that scope.
== Example ==
#include <io>
var p;
par p from 0 to 9 {
print("Hello world\n");
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
[[Category:Parallel]]
b32171a7719d1af159c524f392890ed7c1b5677b
212
211
2013-01-12T13:23:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop: each iteration will execute concurrently on a different process. This allows the programmer to write code MPMD style, with the limitation that the bounds ''a'' and ''b'' must be known at compile time. Variables declared to be multiply allocated within a parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br>
There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.
== Example ==
#include <io>
var p;
par p from 0 to 9 {
print("Hello world\n");
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
[[Category:Parallel]]
aa6ffa1852254f94a50efcdac6334ef30a211654
213
212
2013-01-12T13:24:01Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop: each iteration will execute concurrently on a different process. This allows the programmer to write code MPMD style, with the limitation that the bounds ''a'' and ''b'' must be known at compile time. Variables declared to be multiply allocated within a parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.
== Example ==
#include <io>
var p;
par p from 0 to 9 {
print("Hello world\n");
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
[[Category:Parallel]]
0d1e9ff51fab1c0918d3f78abe3d1422eeb01c6d
214
213
2013-01-12T13:32:13Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop: each iteration will execute concurrently on a different process. This allows the programmer to write code MPMD style, with the limitation that the bounds ''a'' and ''b'' must be known at compile time. Variables declared to be multiply allocated within a parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.<br><br>
''Note:'' This is a blocking construct: regardless of its arguments it involves all processes, each of which will either ignore the block or execute it.
== Example ==
#include <io>
var p;
par p from 0 to 9 {
print("Hello world\n");
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
[[Category:Parallel]]
755ce8c1c1a1a2eb4559e536f5578cf7ef85c16b
215
214
2013-01-12T13:32:19Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop: each iteration will execute concurrently on a different process. This allows the programmer to write code MPMD style, with the limitation that the bounds ''a'' and ''b'' must be known at compile time. Variables declared to be multiply allocated within a parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.<br>
''Note:'' This is a blocking construct: regardless of its arguments it involves all processes, each of which will either ignore the block or execute it.
== Example ==
#include <io>
var p;
par p from 0 to 9 {
print("Hello world\n");
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
[[Category:Parallel]]
bc175be5fc070cc7546ddc76ad3d7f66a5fb57f0
Proc
0
40
222
221
2013-01-12T13:26:46Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and allocated multiple will in fact, by inference, be allocated to the group of processes containing the single process whose rank matches that of the proc block.
== Example ==
#include <io>
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
[[Category:Parallel]]
f3dde76df2cee36702dc2598495f83c73dccebcc
223
222
2013-01-12T13:26:54Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and allocated multiple will in fact, by inference, be allocated to the group of processes containing the single process whose rank matches that of the proc block.
== Example ==
#include <io>
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
[[Category:Parallel]]
fed24badb523c6ca85f0e2b3acbb06084c065c2e
224
223
2013-01-12T13:27:07Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and allocated multiple will in fact, by inference, be allocated to the group of processes containing the single process whose rank matches that of the proc block.
== Example ==
#include <io>
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
[[Category:Parallel]]
a4c578edf88fd1e9c04de90230a85104c95f59c1
225
224
2013-01-12T13:32:39Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and allocated multiple will in fact, by inference, be allocated to the group of processes containing the single process whose rank matches that of the proc block.<br>
''Note:'' This is a blocking construct: regardless of its arguments it involves all processes, each of which will either ignore the block or execute it.
== Example ==
#include <io>
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
[[Category:Parallel]]
7dbeb2f500d9c2c0953703ab53c7e43a7a5a68d0
Sync
0
41
233
232
2013-01-12T13:30:57Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
sync name;
Where the optional ''name'' is a variable.
== Semantics ==
Will synchronise processes and acts as a blocking call involving all processes. This keyword applies to the default shared memory communication as well as other communication types. Omitting the variable name will result in synchronisation for all appropriate constructs. This can be thought of as a barrier: the value of a variable can only be guaranteed after the appropriate barrier has completed.
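== Example ==
The following is a minimal sketch of the barrier behaviour described above (the variable ''x'' and its type are illustrative only):
#include <io>
var x:Int;
x:=10;
sync x;
print("Value of x now guaranteed\n");
After the ''sync'' completes, every process may rely on the value written to ''x''; written simply as ''sync;'' the construct synchronises all appropriate constructs.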
[[Category:Parallel]]
b3e1eb436d952ed212305d11ca3008716cc87ff5
Group
0
181
991
2013-01-12T13:42:29Z
Polas
1
Created page with '== Syntax == group n1,n2,...,nd<br> {<br> group body<br> };<br> where n1,n2,...,nd are specific process ranks, either values of variables known at compile time. == Semantics =…'
wikitext
text/x-wiki
== Syntax ==
group n1,n2,...,nd<br>
{<br>
group body<br>
};<br>
where n1,n2,...,nd are specific process ranks, either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within a parallel scope, such as a group block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct: regardless of its arguments it involves all processes, each of which will either ignore the block or execute it.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
b29d0890048732eab12836725954a4f53bb87cd6
Assignment
0
26
138
137
2013-01-12T13:46:11Z
Polas
1
wikitext
text/x-wiki
==Syntax==
In order to assign a value to a variable then the programmer will need to use variable assignment.
[lvalue]:=[rvalue];
Where ''lvalue'' is a memory reference and ''rvalue'' a memory reference or expression
== Semantics==
Will assign the ''rvalue'' to the ''lvalue''.
== Examples==
var i:=4;
var j:=i;
In this example the variable ''i'' will be declared and set to the value 4, and the variable ''j'' is also declared and set to the value of ''i'' (4.) Via type inference the type of both variables will be that of ''Int''.
[[Category:sequential]]
3d5f4742303d04e7749c431f0f1a7fff4be923ec
Break
0
29
155
154
2013-01-12T13:46:41Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Syntax ==
break;
== Semantics ==
Will break out of the current enclosing loop body
== Example ==
while (true) { break; };
The loop will terminate during its first iteration, when the ''break'' statement exits the body.
[[Category:sequential]]
08f736e7878912592791bc8e152392934dea6380
If
0
32
171
170
2013-01-12T13:47:20Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Syntax ==
if (condition)<br>
{<br>
then body<br>
} else {<br>
else body<br>
};<br>
== Semantics ==
Will evaluate the condition and, if it is true, will execute the code in the ''then body''. Optionally, if the condition is false, the code in the ''else body'' will be executed, if this has been supplied by the programmer.
== Example ==
#include <io>
if (a==b) {
print("Equal");
};
In this code example two variables ''a'' and ''b'' are tested for equality. If they are equal then the message will be displayed. As no else section has been specified, no further behaviour occurs if they are unequal.
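To illustrate the optional else branch, a hypothetical extension of the same fragment:
#include <io>
if (a==b) {
print("Equal");
} else {
print("Not equal");
};
Here the ''else body'' runs whenever the condition is false, so exactly one of the two messages is always displayed.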
[[Category:sequential]]
05b3be3748a88084353cd11af85a9e7b421f8282
Currenttype
0
99
553
552
2013-01-12T13:49:21Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
currenttype varname;
== Semantics ==
Will return the current type of the variable.<br><br>
''Note:'' If a variable is used within a type context then this is assumed to be shorthand for the current type of that variable.<br>
''Note:'' This is a static construct and hence only available during compilation. It must be statically deducible and not used in a dynamic manner.
== Example ==
var i: Int;
var q:currenttype i;
Will declare ''q'' to have the same type as ''i'' (an integer).
[[Category:Sequential]]
[[Category:Types]]
ff9c3637783501ce2b74a5e07a66edf84a6e7179
Declaration
0
24
129
128
2013-01-12T13:58:26Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
All variables must be declared before they are used. In Mesham one may declare a variable via its value or explicit type.
var [varname];<br>
var [varname]:=[Value];<br>
var [varname]:[Type];<br>
== Semantics ==
The environment will map the identifier to a storage location, and that variable is then usable. In the case of a value being specified, the compiler will infer the type via type inference, either here or when the first assignment takes place.<br><br>
''Note:'' It is not possible to declare a variable with the value ''null'' as this is a special, no value, placeholder and as such has no type.
== Examples ==
var a;
var b:=99;
a:="hello";
In the code example above, the variable ''a'' is declared; without any further information its type is inferred from its first use (to hold type String.) Variable ''b'' is declared with value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes.
var t:Char;
var z:Char :: allocated[single[on[2]]];
Variable ''t'' is declared to be a character; without further type information it is also assumed to be on all processes (by default the type Char is allocated to all processes.) Lastly, the variable ''z'' is declared to be of type character, but is allocated only on a single process (process 2.)
[[Category:sequential]]
d7919935154b037a9ee87691e7585c6129c3d28f
130
129
2013-01-12T13:59:00Z
Polas
1
/* Syntax */
wikitext
text/x-wiki
== Syntax ==
All variables must be declared before they are used. In Mesham one may declare a variable via its value or explicit type.
var name;<br>
var name:=[Value];<br>
var name:[Type];<br>
Where ''name'' is the name of the variable being declared.
== Semantics ==
The environment will map the identifier to a storage location, and that variable is then usable. In the case of a value being specified, the compiler will infer the type via type inference, either here or when the first assignment takes place.<br><br>
''Note:'' It is not possible to declare a variable with the value ''null'' as this is a special, no value, placeholder and as such has no type.
== Examples ==
var a;
var b:=99;
a:="hello";
In the code example above, the variable ''a'' is declared; without any further information its type is inferred from its first use (to hold type String.) Variable ''b'' is declared with value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes.
var t:Char;
var z:Char :: allocated[single[on[2]]];
Variable ''t'' is declared to be a character; without further type information it is also assumed to be on all processes (by default the type Char is allocated to all processes.) Lastly, the variable ''z'' is declared to be of type character, but is allocated only on a single process (process 2.)
[[Category:sequential]]
c43b1ec211bcbec7f7b9d1b18cab38e765f77cc7
Declaredtype
0
100
559
558
2013-01-12T15:54:33Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
declaredtype name
Where ''name'' is a variable name
== Semantics ==
Will return the declared type of the variable.<br><br>
''Note:'' This is a static construct only and its lifetime is limited to compilation.
== Example ==
var i:Int;
i:i::const[];
i:declaredtype i;
This code example will firstly type ''i'' to be an [[Int]]. On line 2, the type of ''i'' is combined with the type [[const]] (enforcing read only access to the variable's data.) On line 3, the programmer is reverting the variable back to its declared type (i.e. so one can write to the data.)
[[Category:Sequential]]
[[Category:Types]]
9db238bb362786df92d1fc256d63cdc80740c263
For
0
27
144
143
2013-01-12T15:55:20Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
for i from a to b <br>
{<br>
forbody<br>
};<br>
== Semantics ==
The for loop can be thought of as syntactic sugar for a while loop, incrementing the variable after each pass; it will loop from ''a'' to ''b'' inclusive.
== Example ==
#include <io>
var i;
for i from 0 to 9 {
print(i);
};
This code example will loop from 0 to 9 (10 iterations) and display the value of ''i'' on each pass.
[[Category:sequential]]
373bf13fcbee3447c8c7e2f48a5aca7be3277bc9
145
144
2013-01-12T15:55:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
for i from a to b <br>
{<br>
forbody<br>
};<br>
== Semantics ==
The for loop can be thought of as syntactic sugar for a while loop, incrementing the variable after each pass; it will loop from ''a'' to ''b'' inclusive.
== Example ==
#include <io>
#include <string>
var i;
for i from 0 to 9 {
print(itostring(i)+"\n");
};
This code example will loop from 0 to 9 (10 iterations) and display the value of ''i'' on each pass.
[[Category:sequential]]
f39734ae6cedd6dc90974da176471674426a6eee
Sequential Composition
0
34
178
177
2013-01-12T15:57:03Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
body ; body
== Semantics ==
Will execute the code before the sequential composition, '';'', and then (if this terminates) will execute the code after the sequential composition.<br><br>
''Note:'' Unlike many imperative languages, all blocks must be terminated by a form of composition (sequential or parallel.)
== Examples ==
var a:=12 ; a:=99
In the above example variable ''a'' is declared to be equal to 12, after this the variable is then modified to hold the value of 99.
function1() ; function2()
In the second example ''function1'' will execute and then, if it terminates, the function ''function2'' will be called.
[[category:sequential]]
102d3d7402e994c29120bf72cf707b2870972f4f
Throw
0
31
165
164
2013-01-12T15:57:39Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
throw errorstring;
== Semantics ==
Will throw the error string, and either cause termination of the program or, if caught by a try catch block, will be dealt with.
== Example ==
#include <io>
try {
throw "an error";
} catch "an error" {
print("Error occurred!\n");
};
In this example, a programmer defined error ''an error'' is thrown and caught.
[[Category:sequential]]
6f76d3d81fa23413c5e12ddf07f112a734679024
Try
0
30
160
159
2013-01-12T15:59:43Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
try<br>
{<br>
try body<br>
} catch (error string) { <br>
error handling code<br>
}<br>
== Semantics ==
Will execute the code in the try body and handle any errors. This is very important in parallel computing as it allows the programmer to easily deal with any communication errors that may occur. Exception handling is dynamic in Mesham and the last appropriate catch block will be entered into depending on program flow.
== Error Strings ==
There are a number of error strings built into Mesham; additional ones can be specified by the programmer.
*Array Bounds - Accessing an array outside its bounds
*Divide by zero - Divide by zero error
*Memory Out - Memory allocation failure
*Root - Illegal root process in communication
*Rank - Illegal rank in communication
*Buffer - Illegal buffer in communication
*count - Count wrong in communication
*type - Communication type error
*comm - Communication communicator error
*truncate - Truncation error in communication
*Group - Illegal group in communication
*op - Illegal operation for communication
*arg - Arguments used for communication incorrect
*oscli - Error returned by operating system when performing a system call
== Example ==
#include <io>
#include <string>
try {
var a:array[Int,10];
print(itostring(a[12]));
} catch ("Array Bounds") {
print("No Such Index\n");
};
In this example the programmer is trying to access element 12 of array ''a''. If this does not exist, then instead of that element being displayed an error message is put on the screen.
[[Category:sequential]]
3a8001a929ddcbf7cd4e1a3f15344253c6f5ec76
While
0
28
150
149
2013-01-12T16:00:07Z
Polas
1
wikitext
text/x-wiki
==Syntax==
while (condition) whilebody;
==Semantics==
Will loop whilst the condition holds.
== Examples ==
var a:=10;
while (a > 0) {
a--;
};
Will loop, each time decreasing the value of variable ''a'' by 1, until the value reaches 0.
[[Category:Sequential]]
11de39ade77fa9f9236b5e6ae87b7798430d23f7
Category:Types
14
98
548
547
2013-01-12T16:02:50Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below. Where ''elementtype'' is defined in the type library, ''varname'' represents the current type of a variable and ''type :: type'' represents type combination to coerce into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
Compound types are also listed in the type library; to give the reader a flavour, they may contain a number of different subcategories of type
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
== Declarations ==
=== Syntax ===
var name:type;
Where ''type'', as explained, is an ''elementtype'', a ''compoundtype'', variable name or ''type :: type''. The operator '':'' sets the type and ''::'' is type combination (coercion).
=== Semantics ===
This will declare a variable to be a specific type. Type combination is subject to a number of semantic rules. If no type information is given, then the type will be found via inference where possible.
=== Examples ===
var i:Int :: allocated[multiple[]];
Here the variable ''i'' is declared to be integer, allocated to all processes. There are three types included in this declaration, the element type [[Int]] and the compound types [[allocated]] and [[multiple]]. The type [[multiple]] is provided as an argument to the allocation type [[allocated]], which is then combined with the [[Int]] type.
var m:String;
In this example, variable ''m'' is declared to be of type [[String]]. For programmer convenience, by default, the language will automatically assume to combine this with ''allocated[multiple]'' if such allocation type is missing.
== Statements ==
=== Syntax ===
name:type;
=== Semantics ===
Will modify the type of an already declared variable via the '':'' operator. Note, allocation information (via the ''allocation'' type) may not be changed. Type modification such as this binds to the current block, the type is reverted back to its previous value once that block has been left.
=== Examples ===
var i:Int :: allocated[multiple[]];
i:=23;
i:i :: const[];
Here the variable ''i'' is declared to be [[Int|integer]], [[allocated]] to all processes, and its value is set to 23. Later on in the code the type is modified to set it also to be [[const|constant]] (so from this point on the programmer may not change the variable's value.) The third line, ''i:i :: const[];'', sets the type of ''i'' to be that of ''i'' combined with the [[const]] type.
'''Important Rule''' - Changing the type will not have any runtime code generation in itself, although the modified semantics will affect how the variable behaves from that point on.
== Expressions ==
=== Syntax ===
name::type
=== Semantics ===
When used as an expression, a variable's current type can be coerced with additional types just for that expression.
=== Example ===
var i:Int :: allocated[multiple[]];
(i :: channel[1,2]):=82;
i:=12;
This code will declare ''i'' to be an [[Int|integer]], [[allocated]] on all processes. On line 2 ''i :: channel[1,2]'' will combine the [[channel]] type (primitive communication) just for that assignment and then on line 3 the assignment happens as a normal integer. This is because on line 2 we have not set the type of ''i'', just modified it for that assignment.
[[Category:Core Mesham]]
60596283032831000023a13049f01496826ef39d
Type Variables
0
101
564
563
2013-01-12T16:03:28Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
typevar name::=type;
name::=type;
Note how ''::='' is used rather than '':=''; ''typevar'' is the type equivalent of ''var''.
== Semantics ==
Type variables allow the programmer to assign types and type combinations to variables for use as normal program variables. These exist only statically (in compilation) and are not present in the runtime semantics.
== Example ==
typevar m::=Int :: allocated[multiple[]];
var f:m;
typevar q::=declaredtype f;
q::=m;
In the above code example, the type variable ''m'' has the type value ''Int :: allocated[multiple[]]'' assigned to it. On line 2, a new (program) variable is created using this new type variable. In line 3, the type variable ''q'' is declared and has the value of the declared type of program variable ''f''. Lastly in line 4, type variable ''q'' changes its value to become that of type variable ''m''. Although type variables can be thought of as the programmer creating new types, they can also be used like program variables in cases such as equality tests and assignment.
[[Category:Types]]
dfb94a0342487a695ea3aefbfbd0592fec1624ea
Functions
0
38
206
205
2013-01-12T16:11:36Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Syntax ==
function returntype name(arguments)
== Semantics ==
The pass semantics of an argument (by reference or by value) depend on its type. Broadly, all [[:Category:Element Types|element types]] by themselves are passed by value and [[:Category:Composite Types|composite types]] are passed by reference, although this behaviour can be overridden by additional type information. Memory allocated on the heap is passed by reference; static or stack frame memory is passed by value.
== Example ==
function Int add(var a:Int,var b:Int) {
return a + b;
};
This function takes two integers and will return their sum.
function void modify(var a:Int::heap) {
a:=88;
};
In this code example, the ''modify'' function will accept an integer variable but this is allocated on the heap (pass by reference.) The assignment will modify the value of the variable being passed in and will still be accessible once the function has terminated.
== The main function ==
The main function returns void and takes either zero arguments or two. If present, the first argument is the number of command line interface parameters passed in, and the second is a String array containing them; location 0 of the string array is the program name. The main function is the program entry point. It is fine for this not to be present in a Mesham code, as it is then assumed that the code is a library and only accessed via linkage.
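As a minimal sketch, a zero-argument main function might look as follows (the body is illustrative only):
#include <io>
function void main() {
print("Hello world\n");
};
Execution of the program begins at this function.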
[[Category:Core Mesham]]
d5b1b620e8c825e2408104f4f495071b24ecf24b
Operators
0
43
242
241
2013-01-12T16:14:34Z
Polas
1
wikitext
text/x-wiki
== Operators ==
#+ Addition
#- Subtraction
#<nowiki>*</nowiki> Multiplication
#/ Division
#++ Prefix or postfix increment
#-- Prefix or postfix decrement
#<< Bit shift to left
#>> Bit shift to right
#== Test for equality
#!= Test for inequality
#! Logical negation
#( ) Function call or expression parentheses
#[ ] Array element access
#. Member access
#< Test lvalue is smaller than rvalue
#> Test lvalue is greater than rvalue
#<= Test lvalue is smaller or equal to rvalue
#>= Test lvalue is greater or equal to rvalue
#|| Logical OR
#&& Logical AND
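A hypothetical fragment showing several of these operators in use:
var a:=6;
var b:=a*2;
if (a<b && b!=0) {
b++;
};
Here ''*'' multiplies, ''<'' and ''!='' perform comparisons, ''&&'' combines the two tests and ''++'' increments ''b''.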
[[Category:Core Mesham]]
2298096faf2f8a12b4874f25e5a87bb1efa56323
Int
0
45
252
251
2013-01-12T16:18:58Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Int
== Semantics ==
A single whole, 32 bit, number. This is also the type of integer constants.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
# [[allocated]]
# [[multiple]]
# [[stack]]
# [[onesided]]
== Example ==
var i:Int;
var b:=12;
In this example variable ''i'' is explicitly declared to be of type ''Int''. On line 2, variable ''b'' is declared and via type inference will also be of type ''Int''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
49acf7d48e1041c69cbb78260b22a3b9b3c353a6
253
252
2013-01-12T16:25:55Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Int
== Semantics ==
A single whole, 32 bit, number. This is also the type of integer constants.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Int;
var b:=12;
In this example variable ''i'' is explicitly declared to be of type ''Int''. On line 2, variable ''b'' is declared and via type inference will also be of type ''Int''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
ccb04090dfb755170d286acb0ec31205feb31ae1
Template:ElementTypeCommunication
10
46
259
258
2013-01-12T16:20:34Z
Polas
1
wikitext
text/x-wiki
When a variable is assigned to another, depending on where each variable is allocated, there may be communication required to achieve this assignment. The table below details the communication rules in the assignment ''assigned variable := assigning variable''. If the communication is issued from the MPMD programming style then it will be one sided. The default communication listed here is guaranteed to be safe, which may result in a small performance hit.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| communication onto process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
==== Communication Example ====
var a:Int;
var b:Int :: allocated[single[on[2]]];
var p;
par p from 0 to 3 {
if (p==2) b:=p;
a:=b;
};
This code will result in a onesided broadcast (due to no further type information present, this is the default behaviour of element types) where process 2 will broadcast its value of ''b'' to all other processes who will write it into ''a''. As already noted, in absence of allocation information the default of allocating to all processes is used. In this example the variable ''a'' can be assumed to additionally have the type ''allocated[multiple]''.
8b5d3a065968f2b783c804ba0673988128849b73
Double
0
48
270
269
2013-01-12T16:21:39Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Double
== Semantics ==
A double precision 64 bit floating point number. This is the type given to constant floating point numbers that appear in program code.
== Example ==
var i:Double;
In this example variable ''i'' is explicitly declared to be of type ''Double''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
95a1a188f33fd53746dabf51d68457ed4a0a0c57
File
0
52
293
292
2013-01-12T16:22:15Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
File
== Semantics ==
A file handle which the programmer can use to reference open files on the file system
== Example ==
var i:File;
In this example variable ''i'' is explicitly declared to be of type ''File''.
== Communication ==
It is not currently possible to communicate file handles due to operating system constraints.
[[Category:Element Types]]
[[Category:Type Library]]
46e1e16033fdd6deb056c9b6dc0d38709a023523
294
293
2013-01-12T16:26:19Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
File
== Semantics ==
A file handle which the programmer can use to reference open files on the file system
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:File;
In this example variable ''i'' is explicitly declared to be of type ''File''.
== Communication ==
It is not currently possible to communicate file handles due to operating system constraints.
[[Category:Element Types]]
[[Category:Type Library]]
9c1786ca5ee3f791890ebd1c484e0f8b2e3c5163
Float
0
47
264
263
2013-01-12T16:22:34Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Float
== Semantics ==
A 32 bit floating point number
== Example ==
var i:Float;
In this example variable ''i'' is explicitly declared to be of type ''Float''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
d34b4c19e39d832e6b15628f32458ea7e7d1e2e1
265
264
2013-01-12T16:26:09Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Float
== Semantics ==
A 32 bit floating point number
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Float;
In this example variable ''i'' is explicitly declared to be of type ''Float''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
1944adfaea938c0ac7eef93d0ff550739075093e
Char
0
50
281
280
2013-01-12T16:23:09Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Char
== Semantics ==
An 8 bit ASCII character
== Example ==
var i:Char;
var r:='a';
In this example variable ''i'' is explicitly declared to be of type ''Char''. Variable ''r'' is declared and found, via type inference, to also be of type ''Char''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
07af34144c1b66cf03c18c8ec41d5271f7007d8a
Short
0
182
1006
2013-01-12T16:24:30Z
Polas
1
Created page with '== Syntax == Short == Semantics == A single whole, 16 bit, number. === Default typing === In the absence of further type information, the following types are added to the ch…'
wikitext
text/x-wiki
== Syntax ==
Short
== Semantics ==
A single whole 16 bit number.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
# [[allocated]]
# [[multiple]]
# [[stack]]
# [[onesided]]
== Example ==
var i:Short;
In this example variable ''i'' is explicitly declared to be of type ''Short''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
a8b7e564165d41ad72c0235256246bb2327217c8
1007
1006
2013-01-12T16:25:14Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Short
== Semantics ==
A single whole 16 bit number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Short;
In this example variable ''i'' is explicitly declared to be of type ''Short''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
558c3a95ed5d4802659c0b484d484383d4cf2884
Template:ElementDefaultTypes
10
183
1011
2013-01-12T16:24:53Z
Polas
1
Created page with 'In the absence of further type information, the following types are added to the chain: # [[allocated]] # [[multiple]] # [[stack]] # [[onesided]]'
wikitext
text/x-wiki
In the absence of further type information, the following types are added to the chain:
# [[allocated]]
# [[multiple]]
# [[stack]]
# [[onesided]]
26ad3a2871f398232ea1a7bc36c904d6aa6436a4
String
0
51
287
286
2013-01-12T16:25:27Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
String
== Semantics ==
A string of characters
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:String;
var p:="Hello World!";
In this example variable ''i'' is explicitly declared to be of type ''String''. Variable ''p'' is found, via type inference, also to be of type ''String''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
3eab6a609d78781656f4403825444d9cf3d9b7c8
Long
0
53
299
298
2013-01-12T16:25:38Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Long
== Semantics ==
A long 64 bit number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Long;
In this example variable ''i'' is explicitly declared to be of type ''Long''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
f7fcbed8e4c1e3f8dd20bf3867cc810495d473d5
Double
0
48
271
270
2013-01-12T16:26:30Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Double
== Semantics ==
A double precision 64 bit floating point number. This is the type given to constant floating point numbers that appear in program code.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Double;
In this example variable ''i'' is explicitly declared to be of type ''Double''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
2cbed066760df566d5a243b8678febc7e27157f2
Char
0
50
282
281
2013-01-12T16:26:41Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Char
== Semantics ==
An 8 bit ASCII character
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Char;
var r:='a';
In this example variable ''i'' is explicitly declared to be of type ''Char''. Variable ''r'' is declared and found, via type inference, to also be of type ''Char''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
f25e6c35b60a558db0387fa0e8d4c2239a4a318c
Bool
0
49
276
275
2013-01-12T16:26:52Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Bool
== Semantics ==
A true or false value
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Bool;
var x:=true;
In this example variable ''i'' is explicitly declared to be of type ''Bool''. Variable ''x'' is declared with the value ''true'', which via type inference results in its type also becoming ''Bool''.
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
1cf552a72727e7223b3be7e5a64a1ab05dca94ac
Extern
0
69
369
368
2013-01-12T16:37:03Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
extern[]
== Semantics ==
Provided as additional allocation type information, this tells the compiler NOT to allocate memory for the variable as this has been already done externally.
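== Example ==
A possible usage, following the pattern of the other attribute types (this exact combination and the variable name ''buffer'' are illustrative assumptions rather than a definitive form):
var buffer:Int :: allocated[multiple[]] :: extern[];
Here ''buffer'' is declared as an integer on all processes, but the compiler performs no allocation of its own; the memory is assumed to have been allocated externally, for example via embedded C code.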
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Attribute Types]]
bf9f71f2b09f545278c4804cda52f88fd62faaf0
Directref
0
70
374
373
2013-01-12T16:37:39Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
directref[ ]
== Semantics ==
This tells the compiler that the programmer might use this variable outside of the language (e.g. via embedded C code) and not to perform certain optimisations which would prevent this.
== Example ==
var pid:Int :: allocated[multiple[]] :: directref[];
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Attribute Types]]
77caa37d150a1e52047d85f3dc62b4f975caca6e
375
374
2013-01-12T17:51:29Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
directref[ ]
== Semantics ==
This tells the compiler that the programmer might use this variable outside of the language (e.g. via embedded C code) and not to perform certain optimisations which would prevent this.
== Example ==
var pid:Int :: allocated[multiple[]] :: directref[];
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
62e75f365d5bbd8ff2ec1d13dc775c0b5877e62d
Commgroup
0
64
343
342
2013-01-12T16:41:12Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
commgroup[process list]
== Semantics ==
Specified within the multiple type, this will limit memory allocation (and variable communication) to the processes within the list given in this type's arguments. This type will also ensure that the processes in the communication group exist.
== Example ==
var i:Int :: allocated[multiple[commgroup[1,3]]];
In this example there are a number of processes, but only 1 and 3 have variable ''i'' allocated to them. This type will also have ensured that processes zero and two exist, as they must for there to be a process three.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
855d5d74076131cfaed92106651745efcbc3cc5f
344
343
2013-01-12T17:50:41Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
commgroup[process list]
== Semantics ==
Specified within the multiple type, this will limit memory allocation (and variable communication) to the processes within the list given in this type's arguments. This type will also ensure that the processes in the communication group exist.
== Example ==
var i:Int :: allocated[multiple[commgroup[1,3]]];
In this example there are a number of processes, but only 1 and 3 have variable ''i'' allocated to them. This type will also have ensured that processes zero and two exist, as they must for there to be a process three.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
6a7829704a891878a39700350e00c1a2baa406ce
Stack
0
184
1014
2013-01-12T16:45:35Z
Polas
1
Created page with '== Syntax == stack[] == Semantics == Instructs the environment to bind the associated variable to stack frame memory which exists for a specific function only whilst it is ''al…'
wikitext
text/x-wiki
== Syntax ==
stack[]
== Semantics ==
Instructs the environment to bind the associated variable to stack frame memory, which exists for a specific function only whilst it is ''alive''. Once the corresponding function has returned, the memory is freed and hence the variable ceases to exist.<br><br>
''Note:'' Used for a function parameter or return type, this type instructs pass by value
== Example ==
var i:Int :: allocated[stack];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the stack frame of the current function. Note how we have omitted the optional braces to the ''stack'' type as there are no arguments.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
db2d3646da70a2e654f0ca5cd3c4e43d07a74391
1015
1014
2013-01-12T16:56:53Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
stack[]
== Semantics ==
Instructs the environment to bind the associated variable to stack frame memory, which exists for a specific function only whilst it is ''alive''. Once the corresponding function has returned, the memory is freed and hence the variable ceases to exist.<br><br>
''Note:'' Used for a function parameter or return type, this type instructs pass by value
== Example ==
var i:Int :: allocated[stack];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the stack frame of the current function. Note how we have omitted the optional braces to the ''stack'' type as there are no arguments.
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
dedcc55b4da8ea1898b42b8a1a078ce4cfcab22c
Category:Types
14
98
549
548
2013-01-12T16:47:38Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed below, where ''elementtype'' is defined in the type library, ''varname'' represents the current type of a variable and ''type :: type'' represents type combination to coerce into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
All element types start with a capitalised first letter and there must be at least one element type per type chain. Compound types start with a lower case letter and fall into a number of different subcategories of type
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
Types may be referred to with or without arguments; it is therefore optional to specify the square braces ''[]'' after a type, with or without data inside them.
== Declarations ==
=== Syntax ===
var name:type;
Where ''type'', as explained, is an ''elementtype'', a ''compoundtype'', variable name or ''type :: type''. The operator '':'' sets the type and ''::'' is type combination (coercion).
=== Semantics ===
This will declare a variable to be a specific type. Type combination is subject to a number of semantic rules. If no type information is given, then the type will be found via inference where possible.
=== Examples ===
var i:Int :: allocated[multiple[]];
Here the variable ''i'' is declared to be integer, allocated to all processes. There are three types included in this declaration, the element type [[Int]] and the compound types [[allocated]] and [[multiple]]. The type [[multiple]] is provided as an argument to the allocation type [[allocated]], which is then combined with the [[Int]] type.
var m:String;
In this example, variable ''m'' is declared to be of type [[String]]. For programmer convenience, by default the language will automatically combine this with ''allocated[multiple]'' if such allocation information is missing.
== Statements ==
=== Syntax ===
name:type;
=== Semantics ===
Will modify the type of an already declared variable via the '':'' operator. Note, allocation information (via the ''allocation'' type) may not be changed. Type modification such as this binds to the current block; the type reverts to its previous value once that block has been left.
=== Examples ===
var i:Int :: allocated[multiple[]];
i:=23;
i:i :: const[];
Here the variable ''i'' is declared to be [[Int|integer]], [[allocated]] to all processes and its value is set to 23. Later on in the code the type is modified to set it also to be [[const|constant]] (so from this point on the programmer may not change the variable's value.) In the third line, ''i:i :: const[];'' sets the type of ''i'' to be that of ''i'' combined with the [[const]] type.
'''Important Rule''' - Changing the type will not have any runtime code generation in itself, although the modified semantics will affect how the variable behaves from that point on.
== Expressions ==
=== Syntax ===
name::type
=== Semantics ===
When used as an expression, a variable's current type can be coerced with additional types just for that expression.
=== Example ===
var i:Int :: allocated[multiple[]];
(i :: channel[1,2]):=82;
i:=12;
This code will declare ''i'' to be an [[Int|integer]], [[allocated]] on all processes. On line 2 ''i :: channel[1,2]'' will combine the [[channel]] type (primitive communication) just for that assignment and then on line 3 the assignment happens as a normal integer. This is because on line 2 we have not set the type of ''i'', just modified it for that assignment.
[[Category:Core Mesham]]
cd2c2389d30f0506fae55fba0187fc365f146d1f
Heap
0
185
1020
2013-01-12T16:51:13Z
Polas
1
Created page with '== Syntax == heap[] == Semantics == Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br> ''Note:'' All h…'
wikitext
text/x-wiki
== Syntax ==
heap[]
== Semantics ==
Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br>
''Note:'' All heap memory is garbage collected. The specifics of this depend on the runtime library; broadly, once memory goes out of scope it will be collected at some future point. Although not necessary, you can assign the ''null'' value to the variable, which will drop a reference to the memory.
''Note:'' Used for a function parameter or return type, this type instructs pass by reference
== Example ==
var i:Int :: allocated[heap];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the heap. Note how we have omitted the optional braces to the ''heap'' type as there are no arguments.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
e0b3885151a47700e01a763d5fcdc6eb63ab882b
1021
1020
2013-01-12T16:57:18Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
heap[]
== Semantics ==
Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br>
''Note:'' All heap memory is garbage collected. The specifics of this depend on the runtime library; broadly, once memory goes out of scope it will be collected at some future point. Although not necessary, you can assign the ''null'' value to the variable, which will drop a reference to the memory.
''Note:'' Used for a function parameter or return type, this type instructs pass by reference
== Example ==
var i:Int :: allocated[heap];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the heap. Note how we have omitted the optional braces to the ''heap'' type as there are no arguments.
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
989f25baa6c0373081f3a7b69d5da9f4c2008fce
Static
0
186
1027
2013-01-12T16:52:46Z
Polas
1
Created page with '== Syntax == static[] == Semantics == Instructs the environment to bind the associated variable to static memory. Because it is allocated into static memory, this is the same p…'
wikitext
text/x-wiki
== Syntax ==
static[]
== Semantics ==
Instructs the environment to bind the associated variable to static memory. Because it is allocated into static memory, this is the same physical memory per function call and loop iteration (environment binding only occurs once.)<br><br>
''Note:'' Used for a function parameter or return type, this type instructs pass by value
== Example ==
var i:Int :: allocated[static];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on static memory. Note how we have omitted the optional braces to the ''static'' type as there are no arguments.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
1304702ffae55757493923d683835ad07c59ceee
1028
1027
2013-01-12T16:57:03Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
static[]
== Semantics ==
Instructs the environment to bind the associated variable to static memory. Because it is allocated into static memory, this is the same physical memory per function call and loop iteration (environment binding only occurs once.)<br><br>
''Note:'' Used for a function parameter or return type, this type instructs pass by value
== Example ==
var i:Int :: allocated[static];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on static memory. Note how we have omitted the optional braces to the ''static'' type as there are no arguments.
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Allocation Types]]
bcb7443639bec656f6f335d974aca5575d6a9f19
Template:DefaultMemoryAllocation
10
187
1033
2013-01-12T16:56:12Z
Polas
1
Created page with '{| border="1" cellspacing="0" cellpadding="5" align="center" ! Type ! Default allocation strategy |- | [[:Category:Element Types|All element types]] | [[Stack]] |- | [[Array]] |…'
wikitext
text/x-wiki
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Type
! Default allocation strategy
|-
| [[:Category:Element Types|All element types]]
| [[Stack]]
|-
| [[Array]]
| [[Heap]]
|-
| [[Record]]
| [[Stack]]
|-
| [[Referencerecord|Reference record]]
| [[Heap]]
|}
566e725490d0f853bfaba7ca88d4f8cf04193b0a
String
0
51
288
287
2013-01-12T16:58:45Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
String
== Semantics ==
A string of characters. All strings are immutable, concatenation of strings will in fact create a new string.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:String;
var p:="Hello World!";
In this example variable ''i'' is explicitly declared to be of type ''String''. Variable ''p'' is found, via type inference, also to be of type ''String''.
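Since strings are immutable, concatenation always yields a new string rather than modifying an existing one; a small illustrative sketch using ''p'' from above (the variable name ''q'' is hypothetical):
var q:=p + "!";
Here ''q'' is a newly created ''String'' holding the concatenated result, while ''p'' itself is left unchanged.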
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
007ddf1f82707382e3f45db5ffba7cff177e7506
Group
0
181
992
991
2013-01-12T16:59:54Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of arguments, involves all processes, which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
eb2dc7f07368f56b330259638ec71e6d16bf6715
993
992
2013-01-12T17:00:14Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of arguments, involves all processes, which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
2232a4680aafae95b66830cecaa53d8d46b9f7b3
Array
0
71
384
383
2013-01-12T17:04:37Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element or record type, followed by the dimensions. The programmer can provide any number of dimensions to create an n dimensional array. The default is row major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer uses the traditional ''name[index]'' syntax.<br><br>
''Note:'' If the dimensions are omitted then it is assumed to be a one dimensional array of infinite size without any explicit memory allocation (i.e. data provided into a function.) Be aware that without any size information it is not possible to bounds check indexes.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
# [[allocated]]
# [[multiple]]
# [[heap]]
# [[onesided]]
== Communication ==
When an array variable is assigned to another then, depending on where each variable is allocated, there may be communication to achieve this assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with the element types, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| Communication to process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
#include <io>
#include <string>
var a:array[String,2];
a[0]:="Hello";
a[1]:="World";
print(itostring(a[0])+" "+itostring(a[1])+"\n");
This example will declare variable ''a'' to be an array of 2 Strings. Then the first location in the array will be set to ''Hello'' and the second location set to ''World''. Lastly the code will display both these array string locations on stdout, followed by a newline.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
9efd3f179d0bf93060de542640709be733126490
385
384
2013-01-12T17:46:46Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element or record type, followed by the dimensions. The programmer can provide any number of dimensions to create an n dimensional array. The default is row major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer uses the traditional ''name[index]'' syntax.<br><br>
''Note:'' If the dimensions are omitted then it is assumed to be a one dimensional array of infinite size without any explicit memory allocation (i.e. data provided into a function.) Be aware that without any size information it is not possible to bounds check indexes.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[heap]]
* [[onesided]]
== Communication ==
When an array variable is assigned to another then, depending on where each variable is allocated, there may be communication to achieve this assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with the element types, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| Communication to process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
#include <io>
#include <string>
var a:array[String,2];
a[0]:="Hello";
a[1]:="World";
print(itostring(a[0])+" "+itostring(a[1])+"\n");
This example will declare variable ''a'' to be an array of 2 Strings. Then the first location in the array will be set to ''Hello'' and the second location set to ''World''. Lastly the code will display both these array string locations on stdout, followed by a newline.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
23f80a482232c6d39109132a1027e8599eccc154
386
385
2013-01-12T17:49:20Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element or record type, followed by the dimensions. The programmer can provide any number of dimensions to create an n dimensional array. The default is row major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer uses the traditional ''name[index]'' syntax.<br><br>
''Note:'' If the dimensions are omitted then it is assumed to be a one dimensional array of infinite size without any explicit memory allocation (i.e. data provided into a function.) Be aware that without any size information it is not possible to bounds check indexes.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[heap]]
* [[onesided]]
== Communication ==
When an array variable is assigned to another then, depending on where each variable is allocated, there may be communication to achieve this assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with the element types, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| Communication to process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
#include <io>
#include <string>
var a:array[String,2];
a[0]:="Hello";
a[1]:="World";
print(itostring(a[0])+" "+itostring(a[1])+"\n");
This example will declare variable ''a'' to be an array of 2 Strings. Then the first location in the array will be set to ''Hello'' and the second location set to ''World''. Lastly the code will display both these array string locations on stdout, followed by a newline.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
ec36e1e89ca8a36f66a7c7ccb955093936acaa1c
Row
0
72
392
391
2013-01-12T17:07:03Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
row[ ]
== Semantics ==
In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In row major allocation the first dimension is the most major and the last most minor.
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
Here the array is column major allocated, but the programmer has overridden this (just for the assignment) in line 3. If an array of one allocation is copied to an array of a different allocation then transposition will be performed automatically in order to preserve indexes.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
94bc339a3cfa1bb6bafc435345a141b96033d7e5
Col
0
73
398
397
2013-01-12T17:08:00Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
col[ ]
== Semantics ==
In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In column major allocation the first dimension is the least major and last dimension most major
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
Here the array is column major allocated, but the programmer has overridden this (just for the assignment) in line 3. If an array of one allocation is copied to an array of a different allocation then transposition will be performed automatically in order to preserve indexes.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Collection Types]]
db54144147290bc1165ea108449574d4fca99574
399
398
2013-01-12T17:50:33Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
col[ ]
== Semantics ==
In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In column major allocation the first dimension is the least major and last dimension most major
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
Here the array is column major allocated, but the programmer has overridden this (just for the assignment) in line 3. If an array of one allocation is copied to an array of a different allocation then transposition will be performed automatically in order to preserve indexes.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
221d9f7d5fe3dbf18d33ae7f3b34812928d1c11e
Channel
0
74
404
403
2013-01-12T17:09:49Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
channel[a,b]
Where ''a'' and ''b'' are both distinct processes which the channel will connect.
== Semantics ==
The ''channel'' type will specify that a variable is a channel from process ''a'' (sender) to process ''b'' (receiver.) Normally this will result in synchronous communication, although if the ''async'' type is used then asynchronous communication is selected instead. Note that the channel is unidirectional, where process ''a'' sends and ''b'' receives, NOT the other way around.<br><br>
''Note:'' By default (no further type information) all channel communication is blocking using standard send.<br>
''Note:'' If no allocation information is specified with the channel type then the underlying variable will not be assigned any memory - it is instead an abstract connection in this case.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2 {
(x::channel[0,2]):=193;
var hello:=(x::channel[0,2]);
};
In this case, ''x'' is a channel between processes 0 and 2. In the par loop process 0 sends the value 193 to process 2. Then the variable ''hello'' is declared and process 2 will receive this value.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
2356cbe19c5c201ece1bc4cf3d3dee46f308193d
405
404
2013-01-12T17:50:21Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
channel[a,b]
Where ''a'' and ''b'' are both distinct processes which the channel will connect.
== Semantics ==
The ''channel'' type specifies that a variable is a channel from process ''a'' (sender) to process ''b'' (receiver.) Normally this will result in synchronous communication, although if the ''async'' type is used then asynchronous communication is selected instead. Note that a channel is unidirectional, where process ''a'' sends and ''b'' receives, NOT the other way around.<br><br>
''Note:'' By default (no further type information) all channel communication is blocking using standard send.<br>
''Note:'' If no allocation information is specified with the channel type then the underlying variable will not be assigned any memory - it is instead an abstract connection in this case.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2 {
(x::channel[0,2]):=193;
var hello:=(x::channel[0,2]);
};
In this case, ''x'' is a channel between processes 0 and 2. In the par loop process 0 sends the value 193 to process 2. Then the variable ''hello'' is declared and process 2 will receive this value.
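As a rough illustration of the unidirectional rule, the sketch below is plain Python (not Mesham); the class and names are invented, and blocking behaviour is deliberately not modelled. Only the declared sender end may send and only the declared receiver end may receive.

```python
# Illustrative sketch (Python, not Mesham) of a unidirectional channel:
# only the declared sender may send, only the declared receiver may receive.
class Channel:
    def __init__(self, sender, receiver):
        self.sender, self.receiver = sender, receiver
        self._slot = None

    def send(self, pid, value):
        assert pid == self.sender, "only the sender end may send"
        self._slot = value

    def recv(self, pid):
        assert pid == self.receiver, "only the receiver end may receive"
        return self._slot

# x::channel[0,2]: process 0 sends 193, process 2 receives it.
ch = Channel(sender=0, receiver=2)
ch.send(0, 193)
hello = ch.recv(2)
print(hello)  # 193
```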
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
6879bf88857e9379a53e7e937440ec85858e78b1
Onesided
0
76
414
413
2013-01-12T17:18:53Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
onesided[a,b]
onesided[]
== Semantics ==
Identical to the [[Channel]] type, but will perform onesided communication rather than p2p. This form of communication is less efficient than p2p, but there are no issues such as deadlock to consider. This type is connected to the [[sync]] keyword, which allows the programmer to barrier synchronise to ensure up to date values. The current memory model is Concurrent Read Concurrent Write (CRCW.)<br><br>
''Note:'' This is the default communication behaviour in the absence of further type information.
== Example ==
var i:Int::onesided::allocated[single[on[2]]];
proc 0 {i:=34;};
sync i;
In the above code example variable ''i'' is declared to be an Integer, allocated on process two only and using onesided communication. In line two an assignment occurs on process zero which will write the value from process zero into the memory held by process two. At line three barrier synchronisation will occur on variable ''i'', which in this case will involve processes zero and two ensuring that the value has been written fully and is available.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
0f865cc1dc0f7b32374186fb9bd944fa9138e231
Template:ReductionOperations
10
188
1035
2013-01-12T17:26:26Z
Polas
1
Created page with '{| border="1" cellspacing="0" cellpadding="5" align="center" ! Operator ! Description |- | max | Identify the maximum value |- | min | Identify the minimum value |- | sum | Sum …'
wikitext
text/x-wiki
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Operator
! Description
|-
| max
| Identify the maximum value
|-
| min
| Identify the minimum value
|-
| sum
| Sum all the values together
|-
| prod
| Generate product of all values
|}
4108a5916a7c446a74f69d6ae0d01eead98569ca
1036
1035
2013-01-12T17:27:13Z
Polas
1
wikitext
text/x-wiki
{| border="1" cellspacing="0" cellpadding="5" align="left"
! Operator
! Description
|-
| max
| Identify the maximum value
|-
| min
| Identify the minimum value
|-
| sum
| Sum all the values together
|-
| prod
| Generate product of all values
|}
2af12cb1ab4f0b0538c77b96fec83ff7e9ffac5c
Reduce
0
77
421
420
2013-01-12T17:26:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
reduce[root,operation]
== Semantics ==
All processes in the group will combine their values together, applying the reduction operation, with the result placed at the root process.
== Supported operations ==
{{ Template:ReductionOperations }}
== Example ==
var t:Int::allocated[multiple[]];
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::reduce[1,"max"]);
x:=p;
t:=x;
};
In this example, ''x'' is to be reduced, with the root as process 1 and the operation will be to find the maximum number. In the first assignment ''x:=p'' all processes will combine their values of ''p'' and the maximum will be placed into process 1's ''x''. In the second assignment ''t:=x'' processes will combine their values of ''x'' and the maximum will be placed into process 1's ''t''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
99991f1e0c265d39029b078825d75cde9bac6146
422
421
2013-01-12T17:27:02Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
reduce[root,operation]
== Semantics ==
All processes in the group will combine their values together, applying the reduction operation, with the result placed at the root process.
== Example ==
var t:Int::allocated[multiple[]];
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::reduce[1,"max"]);
x:=p;
t:=x;
};
In this example, ''x'' is to be reduced, with the root as process 1 and the operation will be to find the maximum number. In the first assignment ''x:=p'' all processes will combine their values of ''p'' and the maximum will be placed into process 1's ''x''. In the second assignment ''t:=x'' processes will combine their values of ''x'' and the maximum will be placed into process 1's ''t''.
== Supported operations ==
{{ Template:ReductionOperations }}
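The reduction semantics can be sketched in plain Python (not Mesham; the function is invented for illustration): every process contributes its local value and the combined result lands at the root.

```python
# Illustrative sketch (Python, not Mesham) of the four reduction
# operations: each process contributes a value, the root receives
# the combined result.
import math

OPS = {
    "max": max,
    "min": min,
    "sum": sum,
    "prod": math.prod,
}

def reduce_at_root(contributions, operation):
    """Value placed into the root process's copy of the variable."""
    return OPS[operation](contributions)

# Four processes contributing their PID, as in the example above.
pids = [0, 1, 2, 3]
print(reduce_at_root(pids, "max"))  # 3
print(reduce_at_root(pids, "sum"))  # 6
```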
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
08b7654b7570040806cac94e0e09d723ddf89b7d
Broadcast
0
78
428
427
2013-01-12T17:27:39Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
broadcast[root]
== Semantics ==
This type will broadcast a variable amongst the processes, with the root (source) being PID=root. The variable concerned must either be allocated to all or a group of processes (in the latter case communication will be limited to that group.)
== Example ==
var a:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
(a::broadcast[2]):=23;
};
In this example process 2 (the root) will broadcast the value 23 amongst the processes, each process receiving this value and placing it into their copy of ''a''.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
7d8e837a29a3627cf760ee4839f112a400a1b206
429
428
2013-01-12T17:50:02Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
broadcast[root]
== Semantics ==
This type will broadcast a variable amongst the processes, with the root (source) being PID=root. The variable concerned must either be allocated to all or a group of processes (in the latter case communication will be limited to that group.)
== Example ==
var a:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
(a::broadcast[2]):=23;
};
In this example process 2 (the root) will broadcast the value 23 amongst the processes, each process receiving this value and placing it into their copy of ''a''.
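A minimal sketch of the data movement in plain Python (not Mesham; the function is invented for illustration): the root's value overwrites every process's copy.

```python
# Illustrative sketch (Python, not Mesham) of broadcast[root]: the
# root's value overwrites every process's copy of the variable.
def broadcast(copies, root):
    """Return each process's copy after the root broadcasts its value."""
    return [copies[root]] * len(copies)

# Process 2 assigns 23 through a::broadcast[2]; afterwards every
# process's copy of a holds 23.
a = [0, 0, 23, 0]
print(broadcast(a, root=2))  # [23, 23, 23, 23]
```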
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
b11e4366b7f85617c6b47abd28e497703aef4644
Allreduce
0
82
449
448
2013-01-12T17:30:04Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allreduce[operation]
== Semantics ==
Similar to the [[reduce]] type, but the combined result is made available on every process rather than just the root.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::allreduce["min"]):=p;
};
In this case all processes will perform the reduction on ''p'' and all processes will have the minimum value of ''p'' placed into their copy of ''x''.
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Primitive Communication Types]]
57d5dcffe929b05b9a8b86514bd7b26b36de9d36
450
449
2013-01-12T17:48:59Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allreduce[operation]
== Semantics ==
Similar to the [[reduce]] type, but the combined result is made available on every process rather than just the root.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::allreduce["min"]):=p;
};
In this case all processes will perform the reduction on ''p'' and all processes will have the minimum value of ''p'' placed into their copy of ''x''.
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
68489f1c2fcc282ad92eb8fd59e6284ed7f79759
Buffered
0
87
480
479
2013-01-12T17:33:25Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
buffered[buffersize]
== Semantics ==
This type will ensure that a P2P send reaches the finish state (i.e. completes) when the message has been copied into a buffer of size ''buffersize'' bytes. At some later point the message will be sent to the target process. If ''buffersize'' is not provided then a default is used. This type associates with the [[sync]] keyword, which will wait until the message has been copied out of the buffer.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: buffered[500];
var c:Int::allocated[single[on[2]]] :: buffered[500] :: nonblocking[];
a:=b;
a:=c;
In the P2P communication resulting from the assignment ''a:=b'', process 2 will issue a (blocking) buffered send (buffer size 500 bytes), which will complete once the message has been copied into this buffer. In the assignment ''a:=c'', process 2 will issue another send, this time buffered but also nonblocking, where program flow will continue between the start and finish states of communication. The finish state will be reached once the value of variable ''c'' has been copied into a buffer held on process 2.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Communication Mode Types]]
0b4bb0d457862eab9ed20425465bc5a12ea26a1e
481
480
2013-01-12T17:50:10Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
buffered[buffersize]
== Semantics ==
This type will ensure that a P2P send reaches the finish state (i.e. completes) when the message has been copied into a buffer of size ''buffersize'' bytes. At some later point the message will be sent to the target process. If ''buffersize'' is not provided then a default is used. This type associates with the [[sync]] keyword, which will wait until the message has been copied out of the buffer.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: buffered[500];
var c:Int::allocated[single[on[2]]] :: buffered[500] :: nonblocking[];
a:=b;
a:=c;
In the P2P communication resulting from the assignment ''a:=b'', process 2 will issue a (blocking) buffered send (buffer size 500 bytes), which will complete once the message has been copied into this buffer. In the assignment ''a:=c'', process 2 will issue another send, this time buffered but also nonblocking, where program flow will continue between the start and finish states of communication. The finish state will be reached once the value of variable ''c'' has been copied into a buffer held on process 2.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
9e20ae66239ebafd9645297d300f18d530c3f274
Evendist
0
95
524
523
2013-01-12T17:35:07Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Syntax ==
evendist[]
== Semantics ==
Will distribute data blocks evenly amongst the processes. If there are too few processes then the blocks will wrap around; if there are too few blocks then not all processes will receive a block. The figure below illustrates even distribution of 10 blocks of data over 4 processes.
<center>[[Image:evendist.jpg|Even distribution of 10 blocks of data over 4 processors using type oriented programming]]</center>
== Example ==
var a:array[Int,16,16] :: allocated[row[] :: horizontal[4] :: single[evendist[]]];
var b:array[Int,16,16] :: allocated[row[] :: vertical[4] :: single[evendist[]]];
var e:array[Int,16,16] :: allocated[row[] :: single[on[1]]];
var p;
par p from 0 to 3
{
var q:=b[p][2][3];
var r:=a[p][2][3];
var s:=(b :: horizontal[])[p][2][3];
};
a:=e;
In this example (which involves 4 processors) there are three [[array|arrays]] declared, ''a'', ''b'' and ''e''. Array ''a'' is [[horizontal|horizontally]] partitioned into 4 blocks, evenly distributed amongst the processors, whilst ''b'' is [[vertical|vertically]] partitioned into 4 blocks and also evenly distributed amongst the processors. Array ''e'' is located on processor 1 only. All arrays are allocated [[row]] major. In the [[par]] loop, variables ''q'', ''r'' and ''s'' are declared and assigned values at specific points in a processor's block. Because ''b'' is partitioned [[vertical|vertically]] and ''a'' [[horizontal|horizontally]], variable ''q'' is the value at ''b's'' block memory location 11, whilst ''r'' is the value at ''a's'' block memory location 35. On line 9, variable ''s'' is the value at ''b's'' block memory location 50 because, just for this expression, the programmer has used the [[horizontal]] type to take a horizontal view of the distributed array. It should be noted that in line 9, it is just the view of the data that is changed; the underlying data allocation is not modified.
In line 11 the assignment ''a:=e'' results in a scatter as per the definition of its declared type.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Distribution Types]]
1b0821bbb9ae98451f0269b6977fb21cdb2ffbbe
Record
0
96
530
529
2013-01-12T17:42:27Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef structure in C. To access a member of a record use the dot operator, ''.''
== Example ==
typevar complex ::= record["r",Float,"i",Float];
var a:array[complex, 10];
var number:complex;
var pixel : record["r",Int,"g",Int,"b",Int];
a[1].r:=8.6;
number.i:=3.22;
pixel.b:=128;
In the above example, ''complex'' is declared as a [[Type_Variables|type variable]] representing a complex number. This is then used as the type chain for ''a'', which is an [[array]], and for ''number''. Using records via a type variable can be useful, although the alternative is to include the record directly in the type chain of a variable, as in the declaration of ''pixel''. Do not confuse ''complex'' (a type variable existing during compilation only) with ''pixel'' (a normal data variable which exists at runtime.) In the last three lines assignment occurs to the declared variables.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Composition Types]]
d3e1c44a21b7f95785d448c407e894586e5eb5c2
531
530
2013-01-12T17:45:22Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef structure in C. To access a member of a record use the dot operator, ''.''
=== Default typing ===
# [[allocated]]
# [[multiple]]
# [[stack]]
# [[onesided]]
== Example ==
typevar complex ::= record["r",Float,"i",Float];
var a:array[complex, 10];
var number:complex;
var pixel : record["r",Int,"g",Int,"b",Int];
a[1].r:=8.6;
number.i:=3.22;
pixel.b:=128;
In the above example, ''complex'' is declared as a [[Type_Variables|type variable]] representing a complex number. This is then used as the type chain for ''a'', which is an [[array]], and for ''number''. Using records via a type variable can be useful, although the alternative is to include the record directly in the type chain of a variable, as in the declaration of ''pixel''. Do not confuse ''complex'' (a type variable existing during compilation only) with ''pixel'' (a normal data variable which exists at runtime.) In the last three lines assignment occurs to the declared variables.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Composition Types]]
df2807f0282993c18ee61808c20cbf29d6efad3b
532
531
2013-01-12T17:45:44Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef structure in C. To access a member of a record use the dot operator, ''.''
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[stack]]
* [[onesided]]
== Example ==
typevar complex ::= record["r",Float,"i",Float];
var a:array[complex, 10];
var number:complex;
var pixel : record["r",Int,"g",Int,"b",Int];
a[1].r:=8.6;
number.i:=3.22;
pixel.b:=128;
In the above example, ''complex'' is declared as a [[Type_Variables|type variable]] representing a complex number. This is then used as the type chain for ''a'', which is an [[array]], and for ''number''. Using records via a type variable can be useful, although the alternative is to include the record directly in the type chain of a variable, as in the declaration of ''pixel''. Do not confuse ''complex'' (a type variable existing during compilation only) with ''pixel'' (a normal data variable which exists at runtime.) In the last three lines assignment occurs to the declared variables.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Composition Types]]
5f4317fa1d46370251271d8093c607c59a2742a5
Template:ElementDefaultTypes
10
183
1012
1011
2013-01-12T17:45:52Z
Polas
1
wikitext
text/x-wiki
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[stack]]
* [[onesided]]
054b8a87e6d2346be0d60a15229cccf84f0b88f5
Referencerecord
0
97
539
538
2013-01-12T17:46:22Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The [[record]] type may NOT refer to itself (or other records), whereas reference records support this, allowing the programmer to create data structures such as linked lists and trees. There are some added complexities to reference records, such as communicating them (all links and linked nodes will be communicated with the record) and freeing the data (garbage collection.) This results in a slight performance hit and is the reason why the record concept has been split into two types.
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[heap]]
''Currently communication is not available for reference records, this will be fixed at some point in the future.''
== Example ==
#include <io>
#include <string>
typevar node;
node::=referencerecord["prev",node,"data",Int,"next",node];
var head:node;
head:=null;
var i;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=head;
if (head!=null) head.prev:=newnode;
head:=newnode;
};
while (head != null) {
print(itostring(head.data)+"\n");
head:=head.next;
};
In this code example a doubly linked list is created, and then its contents read node by node.
[[Category:Type Library]]
[[Category:Composite Types]]
[[Category:Composition Types]]
827da1a68bcccfc514e2c117a361370a849b970a
Category:Compound Types
14
189
1038
2013-01-12T17:48:27Z
Polas
1
Created page with '[[Category:Type Library]]'
wikitext
text/x-wiki
[[Category:Type Library]]
59080a51ca9983880b93aaf73676382c72785431
Allocated
0
62
332
331
2013-01-12T17:48:46Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allocated[type]
Where ''type'' is optional
== Semantics ==
This type sets the memory allocation of a variable, which may not be modified once set.
== Example ==
var i: Int :: allocated[];
In this example the variable ''i'' is an integer. Although the ''allocated'' type is provided, no additional information is given and as such Mesham allocates it to each processor.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
c032e11f2f937bc6dcf599c6e5e00db34dcf61e3
Alltoall
0
81
444
443
2013-01-12T17:49:10Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
alltoall[elementsoneach]
== Semantics ==
Will cause each process to send some elements (the number being equal to ''elementsoneach'') to every other process in the group.
== Example ==
var x:array[Int,12]::allocated[multiple[]];
var r:array[Int,3]::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::alltoall[3]):=r;
};
In this example each process sends every other process three elements (the elements in its ''r''.) Therefore each process ends up with twelve elements in ''x'', the location of each being based on the source process's PID.
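The resulting data layout can be sketched in plain Python (not Mesham; the helper is invented, and it assumes each process sends the same ''r'' block to every destination, as in the example above).

```python
# Illustrative sketch (Python, not Mesham) of alltoall[3] as used
# above: every process sends its three elements of r to every other
# process; each x is filled in source-PID order.
def alltoall(per_process_data):
    """Return each process's received buffer: the senders' blocks
    concatenated in PID order."""
    gathered = [elem for block in per_process_data for elem in block]
    return [list(gathered) for _ in per_process_data]

r = [[p * 10, p * 10 + 1, p * 10 + 2] for p in range(4)]  # each process's r
x = alltoall(r)
print(x[0])  # [0, 1, 2, 10, 11, 12, 20, 21, 22, 30, 31, 32]
```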
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
ee18d69edec194c6eeb4f63b1e0e0b7f53da1833
Async
0
83
456
455
2013-01-12T17:49:40Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
async[ ]
== Semantics ==
This type specifies that the communication should be carried out asynchronously. Asynchronous communication is often very useful and, if used correctly, can increase the efficiency of some applications (although care must be taken.) There are a number of different ways that the results of asynchronous communication can be accepted; when the asynchronous operation is honoured the data is placed into the variable, but exactly when the operation will be honoured is non-deterministic and care must be taken if using dirty values.
The [[sync]] keyword allows the programmer to synchronise either ALL asynchronous communication or that of a specific variable. The programmer must ensure that all asynchronous communications have been honoured before the process exits, otherwise the behaviour is undefined.
== Examples ==
var a:Int::allocated[multiple[]] :: channel[0,1] :: async[];
var p;
par p from 0 to 2
{
a:=89;
var q:=20;
q:=a;
sync q;
};
In this example, ''a'' is declared to be an integer, allocated to all processes, and to act as an asynchronous channel between processes 0 and 1. In the par loop, the assignment ''a:=89'' is applicable on process 0 only, resulting in an asynchronous send. Each process executes the assignment and declaration ''var q:=20'' but only process 1 will execute the last assignment ''q:=a'', resulting in an asynchronous receive. Each process then synchronises all the communications relating to variable ''q''.
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: async[];
var c:Int::allocated[single[on[3]]] :: async[];
a:=b;
c:=a;
b:=c;
sync;
This example demonstrates the use of the ''async'' type in terms of default shared variable style communication. In the assignment ''a:=b'', processor 2 will issue an asynchronous send and processor 1 will issue a synchronous (standard) receive. In the second assignment, ''c:=a'', processor 3 will issue an asynchronous receive and processor 1 a synchronous send. In the last assignment, ''b:=c'', both processors (3 and 2) will issue asynchronous communication calls (send and receive respectively.) The last line of the program will force each process to wait for and complete all asynchronous communications.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
7c3b2f52ef0f660a934da43958f10c6e4caa01c0
Blocking
0
84
462
461
2013-01-12T17:49:49Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
blocking[ ]
== Semantics ==
Will force P2P communication to be blocking, which is the default setting.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: blocking[];
a:=b;
The P2P communication (send on process 2 and receive on process 1) resulting from the assignment ''a:=b'' will force program flow to wait until it has completed. The ''blocking'' type has been omitted from the type of variable ''a'', but is applied by default.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
d712891ebf9b2499f88e2f08d885483508eef391
Const
0
66
355
354
2013-01-12T17:50:51Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
const[ ]
== Semantics ==
Enforces the read only property of a variable.
== Example ==
var a:Int;
a:=34;
a:(a :: const[]);
a:=33;
The code in the above example will produce an error. Whilst the first assignment (''a:=34'') is legal, on the subsequent line the programmer has modified the type of ''a'' to be that of ''a'' combined with the type ''const''. The second assignment is attempting to modify a now read only variable and will fail.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
f59c58161a352e995d321e68ba55be6721f47a8e
Evendist
0
95
525
524
2013-01-12T17:51:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
evendist[]
== Semantics ==
Will distribute data blocks evenly amongst the processes. If there are too few processes then the blocks will wrap around; if there are too few blocks then not all processes will receive a block. The figure below illustrates even distribution of 10 blocks of data over 4 processes.
<center>[[Image:evendist.jpg|Even distribution of 10 blocks of data over 4 processors using type oriented programming]]</center>
== Example ==
var a:array[Int,16,16] :: allocated[row[] :: horizontal[4] :: single[evendist[]]];
var b:array[Int,16,16] :: allocated[row[] :: vertical[4] :: single[evendist[]]];
var e:array[Int,16,16] :: allocated[row[] :: single[on[1]]];
var p;
par p from 0 to 3
{
var q:=b[p][2][3];
var r:=a[p][2][3];
var s:=(b :: horizontal[])[p][2][3];
};
a:=e;
In this example (which involves 4 processors) there are three [[array|arrays]] declared, ''a'', ''b'' and ''e''. Array ''a'' is [[horizontal|horizontally]] partitioned into 4 blocks, evenly distributed amongst the processors, whilst ''b'' is [[vertical|vertically]] partitioned into 4 blocks and also evenly distributed amongst the processors. Array ''e'' is located on processor 1 only. All arrays are allocated [[row]] major. In the [[par]] loop, variables ''q'', ''r'' and ''s'' are declared and assigned values at specific points in a processor's block. Because ''b'' is partitioned [[vertical|vertically]] and ''a'' [[horizontal|horizontally]], variable ''q'' is the value at ''b's'' block memory location 11, whilst ''r'' is the value at ''a's'' block memory location 35. On line 9, variable ''s'' is the value at ''b's'' block memory location 50 because, just for this expression, the programmer has used the [[horizontal]] type to take a horizontal view of the distributed array. It should be noted that in line 9, it is just the view of the data that is changed; the underlying data allocation is not modified.
In line 11 the assignment ''a:=e'' results in a scatter as per the definition of its declared type.
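The block-to-process mapping can be sketched in plain Python (not Mesham). The modulo rule below is an assumption made for illustration; it reproduces the wrap-around and too-few-blocks behaviour described above.

```python
# Illustrative sketch (Python, not Mesham) of evendist[]: block b is
# owned by process b mod P, so blocks wrap around when there are more
# blocks than processes, and trailing processes receive no block when
# there are more processes than blocks. (The exact mapping rule is an
# assumption made for illustration.)
def evendist(num_blocks, num_procs):
    """Map each block index to its owning process."""
    return [b % num_procs for b in range(num_blocks)]

print(evendist(10, 4))  # 10 blocks over 4 processes: wraps around
print(evendist(2, 4))   # 2 blocks over 4 processes: procs 2 and 3 get none
```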
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Distribution Types]]
15fdfb34610173d804c756fb76ac948d6270879e
Extern
0
69
370
369
2013-01-12T17:51:51Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
extern[]
== Semantics ==
Provided as additional allocation type information, this tells the compiler NOT to allocate memory for the variable as this has already been done externally.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
c02a0a040077162e75f5563fb8c51d0e31643e40
Gather
0
79
434
433
2013-01-12T17:52:06Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
gather[elements,root]
== Semantics ==
Gather a number of elements (equal to ''elements'') from each process and send these to the root process.
== Example ==
var x:array[Int,12] :: allocated[single[on[2]]];
var r:array[Int,3] :: allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::gather[3,2]):=r;
};
In this example, the variable ''x'' is allocated on the root process (2) only. Whereas ''r'' is allocated on all processes. In the assignment all three elements of ''r'' are gathered from each process and sent to the root process (2) and then placed into variable ''x'' in the order defined by the source's PID.
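A sketch of the data movement in plain Python (not Mesham; the helper is invented for illustration): each process's elements are concatenated in PID order into the root's buffer.

```python
# Illustrative sketch (Python, not Mesham) of gather[elements,root]:
# each process's local elements are concatenated in PID order into
# the buffer held by the root.
def gather(per_process_r):
    """Buffer placed into the root's x after the gather."""
    return [elem for block in per_process_r for elem in block]

r = [[p, p, p] for p in range(4)]  # each process's three elements
x_on_root = gather(r)
print(x_on_root)  # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```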
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
e561251042dbc200ca2b4118ae829cacb3bbd80f
Heap
0
185
1022
1021
2013-01-12T17:52:24Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
heap[]
== Semantics ==
Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br>
''Note:'' All heap memory is garbage collected. The specifics of this depend on the runtime library; broadly, once memory goes out of scope it will be collected at some future point. Although not necessary, you can assign the ''null'' value to the variable, which will drop a reference to the memory.<br>
''Note:'' When used for function parameters or the return type, this type instructs pass by reference.
== Example ==
var i:Int :: allocated[heap];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the heap. Note how we have omitted the optional braces of the ''heap'' type as there are no arguments.
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
87f94f308c2909ae7dd864d3cae472cf1fbe8d53
Horizontal
0
90
504
503
2013-01-12T17:52:40Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As shown in the last row of the table, if the two partitions are the same type then a simple copy is performed. However, if they are different then an error will be generated, as Mesham disallows differently typed partitions to be assigned to each other.
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of the block.
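The near-equal split and the ''.high''/''.low'' bounds can be sketched in plain Python (not Mesham). How Mesham spreads the extra rows is not specified here, so giving the extras to the leading blocks is an assumption made for illustration.

```python
# Illustrative sketch (Python, not Mesham): partitioning n rows into
# k horizontal blocks of near-equal size, with the extra rows spread
# over the first blocks; (low, high) are each block's row bounds.
def horizontal_bounds(n_rows, n_blocks):
    """Return (low, high) row bounds for each block, high exclusive."""
    base, extra = divmod(n_rows, n_blocks)
    bounds, low = [], 0
    for b in range(n_blocks):
        size = base + (1 if b < extra else 0)  # leading blocks take extras
        bounds.append((low, low + size))
        low += size
    return bounds

print(horizontal_bounds(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```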
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
0931fb3619108d18274f22416f87480b311d99c3
Multiple
0
63
337
336
2013-01-12T17:52:56Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
multiple[type]
Where ''type'' is optional
== Semantics ==
Included within the [[allocated]] type, this will (with no arguments) set the variable to have memory allocated on all processes within the current scope.
== Example ==
var i: Int :: allocated[multiple[]];
In this example the variable ''i'' is an integer, allocated to all processes.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
1ca91c93566f505fd3173cca8ac72b2ee0e7c217
Nonblocking
0
85
468
467
2013-01-12T17:53:16Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
nonblocking[ ]
== Semantics ==
This type will force P2P communication to be nonblocking. In this mode communication (send or receive) can be thought of as having two distinct states - start and finish. The nonblocking type will start communication and allow program execution to continue between these two states, whilst blocking (standard) mode requires that the finish state be reached before continuing. The [[sync]] keyword can be used to force the program to wait until the finish state has been reached.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[];
var b:Int::allocated[single[on[2]]];
a:=b;
sync a;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking receive whilst process 2 will issue a blocking send. All nonblocking communication with respect to variable ''a'' is completed by the keyword ''sync a''.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
c8a8a7772684c341930e4a7fd30e909668eda2d6
Onesided
0
76
415
414
2013-01-12T17:53:40Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
onesided[a,b]
== Syntax ==
onesided[]
== Semantics ==
Identical to the [[Channel]] type, but will perform onesided communication rather than p2p. This form of communication is less efficient than p2p, but there are no issues such as deadlock to consider. This type is connected to the [[sync]] keyword, which allows for the programmer to barrier synchronise for ensuring up to date values. The current memory model is Concurrent Read Concurrent Write (CRCW.)<br><br>
''Note:'' This is the default communication behaviour in the absence of further type information.
== Example ==
var i:Int::onesided::allocated[single[on[2]]];
proc 0 {i:=34;};
sync i;
In the above code example variable ''i'' is declared to be an Integer, using onesided communication, allocated on process two only. In line two an assignment on process zero writes the value 34 into the memory held by process two. At line three barrier synchronisation occurs on variable ''i'', which in this case involves processes zero and two, ensuring that the value has been fully written and is available.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
ff0751694063cade4353350192112c6ec6d9e1a7
Pipe
0
75
410
409
2013-01-12T17:53:52Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
pipe[a,b]
== Semantics ==
Identical to the [[Channel]] type, except that ''pipe'' is bidirectional rather than unidirectional.
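== Example ==
An illustrative declaration only; by analogy with [[Channel]], the two arguments are assumed to name the endpoint processes.
var c:Int :: pipe[1,2];
Unlike a channel, communication via ''c'' may flow in either direction between processes 1 and 2, so the one declaration serves both directions.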
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
4f15cdd522d9b79b106dba65bd1d3d22b7955430
Ready
0
88
488
487
2013-01-12T17:54:05Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
ready[ ]
== Semantics ==
The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunction with the [[nonblocking]] type, communication start will wait until a matching receive is posted. This type acts as a form of handshaking and can improve performance in some uses.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: ready[];
var c:Int::allocated[single[on[2]]] :: ready[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' will only begin once the receive from process 1 has been issued. With the statement ''a:=c'' the send, even though it is [[nonblocking]], will only start once a matching receive has been issued too.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
6a415f5b1dad9f6e0699e005b60be9c28ab6ddea
Record
0
96
533
532
2013-01-12T17:54:18Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one, new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef structure in C. To access a member of a record use the dot operator, ''.''
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[stack]]
* [[onesided]]
== Example ==
typevar complex ::= record["r",Float,"i",Float];
var a:array[complex, 10];
var number:complex;
var pixel : record["r",Int,"g",Int,"b",Int];
a[1].r:=8.6;
number.i:=3.22;
pixel.b:=128;
In the above example, ''complex'' is declared as a [[Type_Variables|type variable]] representing a complex number. It is then used in the type chains of ''a'' (an [[array]]) and ''number''. Using records via a type variable in this manner can be useful, although an alternative is to include the record directly in a variable's type chain, as in the declaration of ''pixel''. Do not confuse ''complex'' (a type variable existing during compilation only) with ''pixel'' (a normal data variable which exists at runtime.) In the last three lines assignment occurs to the declared variables.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
4a28d41d708e48475115e678218043105a493742
Reduce
0
77
423
422
2013-01-12T17:54:28Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
reduce[root,operation]
== Semantics ==
All processes in the group will combine their values using the given operation, with the result placed at the root process.
== Example ==
var t:Int::allocated[multiple[]];
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::reduce[1,"max"]);
x:=p;
t:=x;
};
In this example, ''x'' is to be reduced, with the root as process 1 and the operation will be to find the maximum number. In the first assignment ''x:=p'' all processes will combine their values of ''p'' and the maximum will be placed into process 1's ''x''. In the second assignment ''t:=x'' processes will combine their values of ''x'' and the maximum will be placed into process 1's ''t''.
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
8aec5f0589a2cbe19a18b58d1ae8c174617510ce
Referencerecord
0
97
540
539
2013-01-12T17:54:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The [[record]] type may NOT refer to itself (or other records) whereas reference records support this, allowing the programmer to create data structures such as linked lists and trees. There are some added complexities to reference records, such as communicating them (all links and linked nodes will be communicated with the record) and freeing the data (garbage collection.) This results in a slight performance hit and is the reason why the record concept has been split into two types.
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[heap]]
''Currently communication is not available for reference records; this will be fixed at some point in the future.''
== Example ==
#include <io>
#include <string>
typevar node;
node::=referencerecord["prev",node,"data",Int,"next",node];
var head:node;
head:=null;
var i;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=head;
if (head!=null) head.prev:=newnode;
head:=newnode;
};
while (head != null) {
print(itostring(head.data)+"\n");
head:=head.next;
};
In this code example a doubly linked list is created, and then its contents read node by node.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
cc6e5146e50e7b40149e59192bb27a160acb0ad6
Row
0
72
393
392
2013-01-12T17:54:55Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
row[ ]
== Semantics ==
In combination with the [[array]] type, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In row major allocation the first dimension is the most major and the last the most minor.
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
Here the array uses column major allocation, but the programmer has overridden this (just for the assignment) in the final line. If an array with one allocation order is copied to an array with a different order then transposition will be performed automatically in order to preserve indexes.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
e5e698ca8e4a923a8794b07b7575ec9923ba65c2
Scatter
0
80
439
438
2013-01-12T17:55:06Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
scatter[elements,root]
== Semantics ==
Will distribute blocks of ''elements'' elements from the array on the root process amongst all processes.
== Example ==
var x:array[Int,3]::allocated[multiple[]];
var r:array[Int,12]::allocated[multiple[]];
var p;
par p from 0 to 3
{
x:(x::scatter[3,1]);
x:=r;
};
In this example, blocks of three elements of array ''r'', on process 1, are scattered to the processes and placed in each one's copy of ''x''.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
3154b7849bbb3e2b3921cb875741d9bc8d1f268a
Share
0
68
364
363
2013-01-12T17:55:19Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
share[name]
== Semantics ==
This type allows the programmer to have two variables sharing the same memory (the variable that the share type is applied to uses the memory of the variable specified as the argument to the type.) This is very useful in HPC applications, as processes are often running at the limit of their resources. The type will share memory with that of the variable ''name'' in the above syntax. In order to keep this type safe, the sharee must be smaller than or equal in size to the memory chunk being shared; this is error checked.
== Example ==
var a:Int::allocated[multiple[]];
var c:Int::allocated[multiple[] :: share[a]];
var e:array[Int,10]::allocated[single[on[1]]];
var u:array[Char,12]::allocated[single[on[1]] :: share[e]];
In the example above, the variables ''a'' and ''c'' will share the same memory, as will the variables ''e'' and ''u''. At first glance the latter pair might look like an error, as array ''u'' has 12 elements whilst array ''e'' has only 10. However, where the two arrays have different element types the sizes are checked dynamically in bytes: as an Int is 32 bits and a Char only 8, this sharing of data works in this case.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
0f9918ed4450c0b815e65d4880f47704b976212a
Single
0
65
350
349
2013-01-12T17:55:31Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
single[type]
single[on[process]]
where ''type'' is optional
== Semantics ==
Will allocate a variable to a specific process. It is most commonly combined with the ''on'' type, which specifies the process to allocate to, although this is not required if the process can be inferred. Additionally, the programmer may place a distribution type within ''single'' when dealing with distributed arrays.
== Example ==
var i:Int :: allocated[single[on[1]]];
In this example variable ''i'' is declared as an integer and allocated on process 1.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
4ddd3e01e9adaaed6cf939ef5c9c9873b9c5a90a
Stack
0
184
1016
1015
2013-01-12T17:55:49Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
stack[]
== Semantics ==
Instructs the environment to bind the associated variable to stack frame memory, which exists only whilst its function is ''alive''. Once the corresponding function has returned, the memory is freed and hence the variable ceases to exist.<br><br>
''Note:'' This type, when used for function parameters or the return type, instructs pass by value
== Example ==
var i:Int :: allocated[stack];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the stack frame of the current function. Note how we have omitted the optional brackets of the ''stack'' type as there are no arguments.
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
7377d3becdced0849927428e301b2317be0d471d
Standard
0
86
474
473
2013-01-12T17:56:05Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
standard[ ]
== Semantics ==
This type will force P2P sends to follow the standard form of reaching the finish state either when the message has been delivered or when it has been copied into a buffer on the sender. This is the default applied if further type information is not present.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[] :: standard[];
var b:Int::allocated[single[on[2]]] :: standard[];
a:=b;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking standard receive whilst process 2 will issue a blocking standard send.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
5f58456dc2053d3bd4c17dbc72dd96e9b46af5f3
Static
0
186
1029
1028
2013-01-12T17:56:15Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
static[]
== Semantics ==
Instructs the environment to bind the associated variable to static memory. Because it is allocated into static memory, this is the same physical memory per function call and loop iteration (environment binding only occurs once.)<br><br>
''Note:'' This type, when used for function parameters or the return type, instructs pass by value
== Example ==
var i:Int :: allocated[static];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also in static memory. Note how we have omitted the optional brackets of the ''static'' type as there are no arguments.
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
ddcf6a3bed6784b208fb3ad5c9af36ceca9f1d48
Synchronous
0
89
494
493
2013-01-12T17:56:34Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
synchronous[]
== Semantics ==
By using this type, the send of P2P communication will only reach the finish state once the message has been received by the target processor.
== Examples ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: synchronous[] :: blocking[];
var c:Int::allocated[single[on[2]]] :: synchronous[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' (and program execution on process 2) will only complete once process 1 has received the value of ''b''. The send involved with the second assignment is synchronous [[nonblocking]], where program execution can continue between the start and finish states, with the finish state only reached once process 1 has received the message (the value of ''c''.) Incidentally, as already mentioned, the [[blocking]] type of variable ''b'' would have been chosen by default if omitted (as in previous examples.)
var a:Int :: allocated[single[on[0]]];
var b:Int :: allocated[single[on[1]]];
a:=b;
a:=(b :: synchronous[]);
The code example above demonstrates the programmer's ability to change the communication send mode just for a specific assignment. In the first assignment, process 1 issues a [[blocking]] [[standard]] send, however in the second assignment the communication mode type ''synchronous'' is coerced with the type of ''b'' to provide a [[blocking]] synchronous send just for this assignment only.
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
92a2b26d792419f9fa481dec05999df05aebe59d
Tempmem
0
67
360
359
2013-01-12T17:56:47Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
tempmem[ ]
== Semantics ==
Used to inform the compiler that the programmer is happy for a call (usually communication) to use temporary memory. Some calls cannot function without this and will give an error; others will work more efficiently with temporary memory but can operate without it at a performance cost. This type is provided because memory is often at a premium, with applications running at the limit of their resources. It is therefore useful for the programmer to indicate whether or not using extra, temporary, memory is allowed.
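== Example ==
An illustrative sketch only:
var a:array[Int,100] :: allocated[single[on[1]]] :: tempmem[];
var b:array[Int,100] :: allocated[single[on[2]]];
a:=b;
Here the programmer indicates that the communication arising from ''a:=b'' is permitted to use extra temporary buffer memory.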
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
c92e44013e114f54bda029d53bbf83dd4edfdf88
Vertical
0
91
513
512
2013-01-12T17:56:57Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
vertical[blocks]
== Semantics ==
Same as the [[horizontal]] type but will partition the array vertically. The figure below illustrates partitioning an array into four blocks vertically.
<center>[[Image:vert.jpg|Vertical Partition of an array into four blocks via type oriented programming]]</center>
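== Example ==
An illustrative sketch only, mirroring the [[horizontal]] type; the ''evendist'' distribution type named here is assumed, following the pattern described in [[:Category:Partition Types|Partition Types]].
var a:array[Int,8,8] :: allocated[vertical[4] :: single[evendist[]]];
Here the array ''a'' is split vertically into four blocks which are then distributed amongst the processes.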
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
9ca59e1b6195a8b9690108977b2d7e8bec797383
Category:Allocation Types
14
55
307
306
2013-01-12T17:57:11Z
Polas
1
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Attribute Types
14
54
304
303
2013-01-12T17:57:27Z
Polas
1
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Collection Types
14
56
310
309
2013-01-12T17:57:37Z
Polas
1
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Communication Mode Types
14
58
318
317
2013-01-12T17:57:50Z
Polas
1
wikitext
text/x-wiki
By default, communication in Mesham is blocking (i.e. will not continue until a send or receive has completed.) Standard sends will complete either when the message has been sent to the target processor or when it has been copied into a buffer, on the source machine, ready for sending. In most situations the standard send is the most efficient; however, in some specialist situations more performance can be gained by overriding this.
The provision of these communication mode types illustrates a powerful aspect of type based parallelism: the programmer can use the default communication method initially and then, to fine tune their code, simply add extra types to experiment with the performance of the different communication options.
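For instance, following the coercion style used on the [[Synchronous]] page, a mode type can be added just for a single assignment:
var a:Int :: allocated[single[on[0]]];
var b:Int :: allocated[single[on[1]]];
a:=b;
a:=(b :: ready[]);
The first assignment uses the default [[blocking]] [[standard]] send; the second experiments with a [[ready]] mode send for that assignment only.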
[[Category:Compound Types]]
3d0877f21ad8c741348088de810ac5a594bb092a
Category:Composition Types
14
61
328
327
2013-01-12T17:58:02Z
Polas
1
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Distribution Types
14
60
325
324
2013-01-12T17:58:12Z
Polas
1
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Partition Types
14
59
322
321
2013-01-12T17:58:38Z
Polas
1
wikitext
text/x-wiki
Often in data parallel HPC applications the programmer wishes to split up data in some way, shape or form. This can be a difficult task, as the programmer must consider issues such as synchronisation and uneven distributions. Mesham provides types to allow for the partitioning and distribution of data; the programmer need only specify the correct type and, behind the scenes, the compiler will deal with all the complexity via the type system. It has been found that this approach works well, not just because it simplifies the program, but also because the (reusable) code associated with parallelization types is designed beforehand by expert systems programmers. These types tend to be better optimized than code written directly by end programmers.
When the programmer partitions data, the compiler splits it up into blocks (an internal type of the compiler.) The location of these blocks depends on the distribution type used - it is possible for all the blocks to be located on one process, on a few or on all, and if there are more blocks than processes they can always "wrap around". The whole idea is that the programmer can refer to separate blocks without needing to worry about exactly where they are located; this means it is very easy to change the distribution method to something more efficient later down the line if required.
The programmer can think of two types of partitioning - partitioning for distribution and partitioning for viewing. The partition type located inside the allocated type is the partition for distribution (and also the default view of the data.) However, if the programmer wishes to change the way they are viewing the blocks of data, then a different partition type can be coerced. This will modify the view of the data, but NOT the underlying way that the data is allocated and distributed amongst the processes. Of course, it is important to avoid an ambiguous combination of partition types. In order to access a certain block of a partition, simply use array access [ ] i.e. ''a[3]'' will access the 3rd block of variable a.
In the code ''var a:array[Int,10,20] :: allocated[A[m] :: single[D[]]]'', the variable ''a'' is declared to be a 2d array size 10 by 20, using partition type A and splitting the data into ''m'' blocks. These blocks are distributed amongst the processes via distribution method ''D''.
In the code fragment ''a:(a::B[])'', the partition type ''B'' is coerced with the type of variable ''a'', and the view of the data changes from that of ''A'' to ''B''.
[[Category:Compound Types]]
66eaf2d4c0434d9b6720a800483533a10b2f3796
Category:Primitive Communication Types
14
57
314
313
2013-01-12T17:58:52Z
Polas
1
wikitext
text/x-wiki
Primitive communication types ensure that all safe forms of communication supported by MPI can also be represented in Mesham. However, unlike the shared variable approach adopted elsewhere, when using primitive communication the programmer is responsible for ensuring that communications complete and match up.
[[Category:Compound Types]]
5d0ec50f91cba0c362a1408df596fd93896dfa14
Template:Documentation
10
14
89
88
2013-01-12T17:59:39Z
Polas
1
wikitext
text/x-wiki
*[[Introduction]]
**[[The Compiler]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham|Core Mesham]]
**[[:Category:Types|Types]]
**[[:Category:Sequential|Sequential]]
**[[:Category:Parallel|Parallel]]
**[[Functions]]
**[[:Category:Preprocessor|Preprocessor]]
*[[:Category:Type Library|Type Library]]
**[[:Category:Element Types|Element Types]]
**[[:Category:Compound Types|Compound Types]]
*[[:Category:Function Library|Function Library]]
3483b754ff50c48b8c86563d33f6838faa7a1841
Category:Maths Functions
14
103
572
571
2013-01-13T12:31:18Z
Polas
1
wikitext
text/x-wiki
The functionality in this library is made available by including ''<maths>'' via the preprocessor.
[[Category:Function Library]]
398a15e1bea4c1e5eb5a6422ee37a9a9033f6772
Cos
0
108
588
587
2013-01-13T12:31:57Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This cos[d] function will find the cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find cosine of
* '''Returns:''' A double representing the cosine
== Example ==
var a:=cos[10];
var y:=cos[a]
[[Category:Function Library]]
[[Category:Maths Functions]]
00d3897077ea2cb3b8990844b2f498ac7b0405a0
589
588
2013-01-13T12:33:05Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This cos[d] function will find the cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find cosine of
* '''Returns:''' A double representing the cosine
== Example ==
var a:=cos(10.4);
var y:=cos(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
0ccaf78c1c0df4ff94299a2d41600f93729e3738
590
589
2013-01-13T12:38:28Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This cos(d) function will find the cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find cosine of
* '''Returns:''' A double representing the cosine
== Example ==
var a:=cos(10.4);
var y:=cos(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
0169ceb11451e36a063719a98aec671d62f3d80c
Sin
0
190
1040
2013-01-13T12:32:50Z
Polas
1
Created page with '== Overview == This sin[d] function will find the sine of the value or variable ''d'' passed to it. * '''Pass:''' A double to find sine of * '''Returns:''' A double representi…'
wikitext
text/x-wiki
== Overview ==
This sin[d] function will find the sine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find sine of
* '''Returns:''' A double representing the sine
== Example ==
var a:=sin(98.54);
var y:=sin(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
8ba5f9857ea4ad05d2ed5abaf99fd3c56269192d
1041
1040
2013-01-13T12:38:40Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sin(d) function will find the sine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find sine of
* '''Returns:''' A double representing the sine
== Example ==
var a:=sin(98.54);
var y:=sin(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
3b5840bc58be95c06356233f033ad58c17caf970
Tan
0
191
1046
2013-01-13T12:33:51Z
Polas
1
Created page with '== Overview == This tan[d] function will find the tangent of the value or variable ''d'' passed to it. * '''Pass:''' A double to find the tangent of * '''Returns:''' A double …'
wikitext
text/x-wiki
== Overview ==
This tan[d] function will find the tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the tangent of
* '''Returns:''' A double representing the tangent
== Example ==
var a:=tan(0.05);
var y:=tan(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
c4bb1b86530b41dadb9c42a056cfcb6ce06bc27d
1047
1046
2013-01-13T12:38:53Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This tan(d) function will find the tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the tangent of
* '''Returns:''' A double representing the tangent
== Example ==
var a:=tan(0.05);
var y:=tan(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
9b15ce48f2d4ef0272eb458a0c419016146628a5
Acos
0
192
1052
2013-01-13T12:38:06Z
Polas
1
Created page with '== Overview == The acos[d] function will find the inverse cosine of the value or variable ''d'' passed to it. * '''Pass:''' A double to find the inverse cosine of * '''Returns…'
wikitext
text/x-wiki
== Overview ==
The acos[d] function will find the inverse cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the inverse cosine of
* '''Returns:''' A double representing the inverse cosine
== Example ==
var d:=acos(10.4);
var y:=acos(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
f899d703e773351b1a17be1c6289dcd4ca2daf6e
1053
1052
2013-01-13T12:39:10Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The acos(d) function will find the inverse cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the inverse cosine of
* '''Returns:''' A double representing the inverse cosine
== Example ==
var d:=acos(10.4);
var y:=acos(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
3420702ae307c9b22eb7b550467e55c2e230282b
1054
1053
2013-01-13T12:45:01Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The acos(d) function will find the inverse cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the inverse cosine of
* '''Returns:''' A double representing the inverse cosine
== Example ==
#include <maths>
var d:=acos(10.4);
var y:=acos(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
f9bab6b17b930a4451098a2d2ae803a0567fc5b0
Asin
0
193
1059
2013-01-13T12:39:45Z
Polas
1
Created page with '== Overview == The asin(d) function will find the inverse sine of the value or variable ''d'' passed to it. * '''Pass:''' A double to find the inverse sine of * '''Returns:'''…'
wikitext
text/x-wiki
== Overview ==
The asin(d) function will find the inverse sine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the inverse sine of
* '''Returns:''' A double representing the inverse sine
== Example ==
var d:=asin(23);
var y:=asin(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
aa861a475d6d1ef699f628dc2922269b0e5b9687
1060
1059
2013-01-13T12:45:12Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The asin(d) function will find the inverse sine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the inverse sine of
* '''Returns:''' A double representing the inverse sine
== Example ==
#include <maths>
var d:=asin(23);
var y:=asin(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
45448120f3e5188bada00ef4fb5e408c0fe8d3ba
Atan
0
194
1065
2013-01-13T12:40:17Z
Polas
1
Created page with '== Overview == The atan(d) function will find the inverse tangent of the value or variable ''d'' passed to it. * '''Pass:''' A double to find the inverse tangent of * '''Retur…'
wikitext
text/x-wiki
== Overview ==
The atan(d) function will find the inverse tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the inverse tangent of
* '''Returns:''' A double representing the inverse tangent
== Example ==
var d:=atan(876.3);
var y:=atan(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
5adeacc2cc05c530f9e8898141bd3c0b884f3f18
Cosh
0
195
1071
2013-01-13T12:40:50Z
Polas
1
Created page with '== Overview == The cosh(d) function will find the hyperbolic cosine of the value or variable ''d'' passed to it. * '''Pass:''' A double to find the hyperbolic cosine of * '''R…'
wikitext
text/x-wiki
== Overview ==
The cosh(d) function will find the hyperbolic cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the hyperbolic cosine of
* '''Returns:''' A double representing the hyperbolic cosine
== Example ==
var d:=cosh(10.4);
var y:=cosh(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
4eb98885132524a10e3823313e5e07a61466a124
Sinh
0
196
1076
2013-01-13T12:42:00Z
Polas
1
Created page with '== Overview == The sinh(d) function will find the hyperbolic sine of the value or variable ''d'' passed to it. * '''Pass:''' A double to find the hyperbolic sine of * '''Retur…'
wikitext
text/x-wiki
== Overview ==
The sinh(d) function will find the hyperbolic sine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the hyperbolic sine of
* '''Returns:''' A double representing the hyperbolic sine
== Example ==
var d:=sinh(0.4);
var y:=sinh(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
2ea2160bb5c1202841d7b0136f8c34588ef6737f
Tanh
0
197
1081
2013-01-13T12:43:51Z
Polas
1
Created page with '== Overview == The tanh(d) function will find the hyperbolic tangent of the value or variable ''d'' passed to it. * '''Pass:''' A double to find the hyperbolic tangent of * ''…'
wikitext
text/x-wiki
== Overview ==
The tanh(d) function will find the hyperbolic tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the hyperbolic tangent of
* '''Returns:''' A double representing the hyperbolic tangent
== Example ==
var d:=tanh(10.4);
var y:=tanh(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
5d5b4aedd8afd479062114734a619d4e05776980
Floor
0
109
596
595
2013-01-13T12:44:48Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This floor(d) function will find the largest integer less than or equal to ''d''.
* '''Pass:''' A double to find floor of
* '''Returns:''' An integer representing the floor
== Example ==
#include <maths>
var a:=floor(10.5);
var y:=floor(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
011c97e59a05ad5d2583f8fea06b48846b28e8bf
Atan
0
194
1066
1065
2013-01-13T12:45:21Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The atan(d) function will find the inverse tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the inverse tangent of
* '''Returns:''' A double representing the inverse tangent
== Example ==
#include <maths>
var d:=atan(876.3);
var y:=atan(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
93dd796d1f841521f11434eff616d434a44e4121
Cos
0
108
591
590
2013-01-13T12:45:32Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This cos(d) function will find the cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the cosine of
* '''Returns:''' A double representing the cosine
== Example ==
#include <maths>
var a:=cos(10.4);
var y:=cos(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
23020329a85707fd7c26035aad6bffc53ed627b0
Cosh
0
195
1072
1071
2013-01-13T12:45:42Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The cosh(d) function will find the hyperbolic cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the hyperbolic cosine of
* '''Returns:''' A double representing the hyperbolic cosine
== Example ==
#include <maths>
var d:=cosh(10.4);
var y:=cosh(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
3616373daa12397462c55e3dee465af81786df3e
Sin
0
190
1042
1041
2013-01-13T12:45:55Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sin(d) function will find the sine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the sine of
* '''Returns:''' A double representing the sine
== Example ==
#include <maths>
var a:=sin(98.54);
var y:=sin(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
cbd64670fab1b949ee9469c5c790253904814884
Sinh
0
196
1077
1076
2013-01-13T12:46:05Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The sinh(d) function will find the hyperbolic sine of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the hyperbolic sine of
* '''Returns:''' A double representing the hyperbolic sine
== Example ==
#include <maths>
var d:=sinh(0.4);
var y:=sinh(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
cff004c3a48d1c6f6f274fd304bb8191d0d89605
Tan
0
191
1048
1047
2013-01-13T12:46:14Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This tan(d) function will find the tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the tangent of
* '''Returns:''' A double representing the tangent
== Example ==
#include <maths>
var a:=tan(0.05);
var y:=tan(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
86f473e92f36f9da1de9ad852db52630cbc0c66d
Tanh
0
197
1082
1081
2013-01-13T12:46:23Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The tanh(d) function will find the hyperbolic tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A double to find the hyperbolic tangent of
* '''Returns:''' A double representing the hyperbolic tangent
== Example ==
#include <maths>
var d:=tanh(10.4);
var y:=tanh(d);
[[Category:Function Library]]
[[Category:Maths Functions]]
d07d0aeb0fc1b72ed4ee06255e47484a55559e29
Ceil
0
198
1086
2013-01-13T12:47:24Z
Polas
1
Created page with '== Overview == This ceil(d) function will find the smallest integer greater than or equal to ''d''. * '''Pass:''' A double to find the ceil of * '''Returns:''' An integer repres…'
wikitext
text/x-wiki
== Overview ==
This ceil(d) function will find the smallest integer greater than or equal to ''d''.
* '''Pass:''' A double to find the ceiling of
* '''Returns:''' An integer representing the ceiling
== Example ==
#include <maths>
var a:=ceil(10.5);
var y:=ceil(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
e2b5ca08f5f04ca1b59fb345e6bba5ea749ce897
Getprime
0
110
601
600
2013-01-13T12:48:03Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This getprime(n) function will find the ''n''th prime number.
* '''Pass:''' An integer
* '''Returns:''' An integer representing the prime
== Example ==
#include <maths>
var a:=getprime(10);
var y:=getprime(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
9083c8a9725fe66cb2fb84d0c29389e854411b4b
Log
0
111
607
606
2013-01-13T12:50:38Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This log(d) function will find the logarithm of ''d''.
* '''Pass:''' A double
* '''Returns:''' A double representing the logarithmic value
== Example ==
#include <maths>
var a:=log(10.54);
var y:=log(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
c0a6686c61539e857c7e9292b10ac2c968cc5a15
608
607
2013-01-13T12:50:58Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This log(d) function will find the natural logarithm of ''d''.
* '''Pass:''' A double
* '''Returns:''' A double representing the logarithmic value
== Example ==
#include <maths>
var a:=log(10.54);
var y:=log(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
5ad2b145c36246309a93faf61937412eb4f2b89e
Log10
0
199
1091
2013-01-13T12:51:24Z
Polas
1
Created page with '== Overview == This log(d) function will find the base 10 logarithmic value of ''d'' * '''Pass:''' A double * '''Returns:''' A double representing the base 10 logarithmic value…'
wikitext
text/x-wiki
== Overview ==
This log10(d) function will find the base 10 logarithm of ''d''.
* '''Pass:''' A double
* '''Returns:''' A double representing the base 10 logarithmic value
== Example ==
#include <maths>
var a:=log10(0.154);
var y:=log10(a);
[[Category:Function Library]]
[[Category:Maths Functions]]
845fdab904e1bd8a0cda02afde3037f682a83779
Mod
0
112
613
612
2013-01-13T12:52:02Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This mod(n,x) function will divide ''n'' by ''x'' and return the remainder.
* '''Pass:''' Two integers
* '''Returns:''' An integer representing the remainder
== Example ==
#include <maths>
var a:=mod(7,2);
var y:=mod(a,a);
[[Category:Function Library]]
[[Category:Maths Functions]]
275e272d7b8576705a0f12e89f654c82ba17bc1d
PI
0
113
618
617
2013-01-13T12:52:23Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pi() function will return PI.
''Note: The number of significant figures of PI is implementation specific.''
* '''Pass:''' None
* '''Returns:''' A double representing PI
== Example ==
var a:=pi();
[[Category:Function Library]]
[[Category:Maths Functions]]
f7fe89662f3e2c87a08cf31ed86ce903864878ac
619
618
2013-01-13T12:55:38Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pi() function will return PI.
''Note: The number of significant figures of PI is implementation specific.''
* '''Pass:''' None
* '''Returns:''' A double representing PI
== Example ==
#include <maths>
var a:=pi();
[[Category:Function Library]]
[[Category:Maths Functions]]
61d56e634322ef8d90bca751d5fdd0f2ecd5b9d5
Pow
0
114
624
623
2013-01-13T12:52:56Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pow(n,x) function will return ''n'' to the power of ''x''.
* '''Pass:''' Two integers
* '''Returns:''' A double representing the result of the exponentiation
== Example ==
var a:=pow(2,8);
[[Category:Function Library]]
[[Category:Maths Functions]]
79aabac51d700740195ffbb056896ffcb46a8d0d
625
624
2013-01-13T12:55:14Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pow(n,x) function will return ''n'' to the power of ''x''.
* '''Pass:''' Two integers
* '''Returns:''' A double representing the result of the exponentiation
== Example ==
#include <maths>
var a:=pow(2,8);
[[Category:Function Library]]
[[Category:Maths Functions]]
ea1d186336d828ac5613304e6cba55442fff5a24
Sqr
0
116
635
634
2013-01-13T12:53:28Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sqr(d) function will return the result of squaring ''d''.
* '''Pass:''' A double to square
* '''Returns:''' A double representing the squared result
== Example ==
var a:=sqr(3.45);
[[Category:Function Library]]
[[Category:Maths Functions]]
f81d1e7bbbc3c2ca4c38a9a9127a60607fdff14b
636
635
2013-01-13T12:54:53Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sqr(d) function will return the result of squaring ''d''.
* '''Pass:''' A double to square
* '''Returns:''' A double representing the squared result
== Example ==
#include <maths>
var a:=sqr(3.45);
[[Category:Function Library]]
[[Category:Maths Functions]]
9d2682c51bb3d662668394babdcc262611c58810
Randomnumber
0
115
630
629
2013-01-13T12:54:32Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This randomnumber(n,x) function will return a random number between ''n'' and ''x''.
''Note: A whole number will be returned unless the bounds 0,1 are passed, in which case a floating point number is returned.''
* '''Pass:''' Two integers defining the bounds of the random number
* '''Returns:''' A double representing the random number
== Example ==
#include <maths>
var a:=randomnumber(10,20);
var b:=randomnumber(0,1);
In this case, ''a'' is a whole number between 10 and 20, whereas ''b'' is a decimal number.
[[Category:Function Library]]
[[Category:Maths Functions]]
82dedd3219eb01b1829da179006470a3983416b5
Sqrt
0
117
641
640
2013-01-13T12:56:24Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sqrt(d) function will return the square root of ''d''.
* '''Pass:''' A double to find the square root of
* '''Returns:''' A double which is the square root
== Example ==
#include <maths>
var a:=sqrt(8.3);
[[Category:Function Library]]
[[Category:Maths Functions]]
e2032698578696b39926491d4ea5caa630c63fd1
Complex
0
200
1095
2013-01-13T13:04:37Z
Polas
1
Created page with '== Overview == The ''complex'' type variable is defined within the mathematical library to represent a complex number with real and imaginary components. This is built from a [[…'
wikitext
text/x-wiki
== Overview ==
The ''complex'' type variable is defined within the mathematical library to represent a complex number with real and imaginary components. This is built from a [[record]] type with both components as doubles.
== Example ==
#include <maths>
var a:complex;
a.i:=19.65;
a.r:=23.44;
[[Category:Function Library]]
[[Category:Maths Functions]]
e0a0347214a2aa9f4aa5135b397544449dbe609e
Category:IO Functions
14
104
575
574
2013-01-13T13:05:23Z
Polas
1
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by including ''<io>'' via the preprocessor
114f028dc298c3ce8c74bfc0096aaae25564a336
Close
0
201
1100
2013-01-13T13:06:39Z
Polas
1
Created page with '== Overview == The close(f) function will close the file represented by handle ''f'' * '''Pass:''' A file handle of type [[File]] * '''Returns:''' Nothing == Example == #inc…'
wikitext
text/x-wiki
== Overview ==
The close(f) function will close the file represented by handle ''f''.
* '''Pass:''' A file handle of type [[File]]
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:=open("myfile.txt","r");
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
3ff7775fc7b26f067a1d94e449b8d0fcd5591e46
Input
0
118
646
645
2013-01-13T13:07:44Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This input(i) function will prompt the user for input via stdin; the result is placed into ''i''.
* '''Pass:''' A variable for the input to be written into, of type String
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:String;
input(f);
print("You wrote: "+f+"\n");
[[Category:Function Library]]
[[Category:IO Functions]]
f8e61d4cacb21f233856a745ca88d8c9a9925a37
647
646
2013-01-13T13:09:16Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This input(i) function will prompt the user for input via stdin; the result is placed into ''i''.
* '''Pass:''' A variable for the input to be written into, of type [[String]]
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:String;
input(f);
print("You wrote: "+f+"\n");
[[Category:Function Library]]
[[Category:IO Functions]]
df2a5bf3e2ee04360ff5ecd5252eb8f6a954af9e
Open
0
202
1105
2013-01-13T13:08:56Z
Polas
1
Created page with '== Overview == This open(n,a) function will open the file of name ''n'' with mode of ''a''. * '''Pass:''' The name of the file to open of type [[String]] and mode of type [[Str…'
wikitext
text/x-wiki
== Overview ==
This open(n,a) function will open the file named ''n'' with mode ''a''.
* '''Pass:''' The name of the file to open of type [[String]] and mode of type [[String]]
* '''Returns:''' A file handle of type [[File]]
== Example ==
#include <io>
var f:=open("myfile.txt","r");
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
6996a3c150ccc016bcf93ef282a76d0f1659123e
Print
0
119
652
651
2013-01-13T13:10:33Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This print(n) function will write a variable of value ''n'' to stdout.
* '''Pass:''' A [[String]] typed variable or value
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:="Hello";
print(f+" world\n");
[[Category:Function Library]]
[[Category:IO Functions]]
5740dd2c2829a2c431e8a7fcb6809c69e7b838e9
Readchar
0
120
657
656
2013-01-13T13:11:30Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This readchar(f) function will read a character from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readchar the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read character from
* '''Returns:''' A character from the file type [[Char]]
== Example ==
#include <io>
var f:=open("hello.txt","r");
var u:=readchar(f);
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
949813d057ec25d72d9eea7d568e2b5da06a0bb5
658
657
2013-01-13T13:13:10Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This readchar(f) function will read a character from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readchar the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read character from
* '''Returns:''' A character from the file type [[Char]]
== Example ==
#include <io>
var f:=open("hello.txt","r");
var u:=readchar(f);
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
b595e6c472b0c50c76a34486cbe1815f6f7c99b7
Readline
0
121
663
662
2013-01-13T13:12:50Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This readline(f) function will read a line (delimited by the new line character) from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readline the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read the line from
* '''Returns:''' A line of the file type [[String]]
== Example ==
#include <io>
var f:=open("hello.txt","r");
var u:=readline(f);
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
cb88c2972d85da5a56d0206b04527a41655fe970
Writestring
0
203
1109
2013-01-13T13:15:07Z
Polas
1
Created page with '== Overview == This writesmring[f,a] function will write the value of ''a'' to the file denoted by handle ''f''. * '''Pass:''' The [[File]] handle to write to and the [[String]…'
wikitext
text/x-wiki
== Overview ==
This writestring(f,a) function will write the value of ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[String]] to write
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:=open("hello.txt","w");
writestring(f,"hello - test");
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
f7a0cf26dc109262d42e986b40fda8882e1afa58
1110
1109
2013-01-13T13:15:39Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This writestring(f,a) function will write the value of ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[String]] to write
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:=open("hello.txt","w");
writestring(f,"hello - test");
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
b599759efa10753ea2b0b165e700376e5c0154da
Writebinary
0
204
1114
2013-01-13T13:16:34Z
Polas
1
Created page with '== Overview == This writebinary(f,a) function will write the value of ''a'' to the file denoted by handle ''f''. * '''Pass:''' The [[File]] handle to write to and the [[Int]] v…'
wikitext
text/x-wiki
== Overview ==
This writebinary(f,a) function will write the value of ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[Int]] variable or value to write into the file in a binary manner
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:=open("hello.txt","w");
writebinary(f,127);
close(f);
[[Category:Function Library]]
[[Category:IO Functions]]
cd965f4abb488f5d69dadeeebbc3c0bf66076afa
Category:Parallel Functions
14
105
578
577
2013-01-13T13:16:57Z
Polas
1
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by including ''<parallel>'' via the preprocessor
e3a19810ea868f3a545857d358b62aca2dd45d89
Pid
0
122
668
667
2013-01-13T13:17:16Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pid() function will return the ID number of the current process.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the current process ID
== Example ==
#include <parallel>
var a:=pid();
[[Category:Function Library]]
[[Category:Parallel Functions]]
5523daa2c514d17f9afe31d72d10a2e2cf1b336f
Processes
0
123
673
672
2013-01-13T13:17:33Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This processes() function will return the number of processes.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the number of processes
== Example ==
#include <parallel>
var a:=processes();
[[Category:Function Library]]
[[Category:Parallel Functions]]
a5d1d7f99c96bdbc9008277b86e1af69baaabe2d
Category:String Functions
14
106
581
580
2013-01-13T13:17:52Z
Polas
1
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by including ''<string>'' via the preprocessor
8b69af5a50dbf837cefe04f7fcf466f3a50ddb76
Charat
0
124
678
677
2013-01-13T13:18:49Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This charat(s,n) function will return the character at position ''n'' of the string ''s''.
* '''Pass:''' A [[String]] and [[Int]]
* '''Returns:''' A [[Char]]
== Example ==
#include <string>
var a:="hello";
var c:=charat(a,2);
var d:=charat("test",0);
[[Category:Function Library]]
[[Category:String Functions]]
d5afaa49898f23ebbe3dfb7adf2c0b0a2d490fbe
Lowercase
0
125
684
683
2013-01-13T13:19:41Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This lowercase(s) function will return the lower case result of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:="HeLlO";
var c:=lowercase(a);
var d:=lowercase("TeST");
[[Category:Function Library]]
[[Category:String Functions]]
2410182eada48229c3af0147da734315cb09ffff
685
684
2013-01-13T13:19:51Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This lowercase(s) function will return the lower case result of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:="HeLlO";
var c:=lowercase(a);
var d:=lowercase("TeST");
[[Category:Function Library]]
[[Category:String Functions]]
7b38a109610cec3d03d772ac2f7771d190ef7f01
Strlen
0
126
690
689
2013-01-13T13:21:14Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This strlen(s) function will return the length of string ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
var a:="hello";
var c:=strlen(a);
[[Category:Function Library]]
[[Category:String Functions]]
043f47639baf0c74795fd49d96caf3c31354fe53
Substring
0
127
695
694
2013-01-13T13:22:20Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This substring(s,n,x) function will return the portion of ''s'' between positions ''n'' and ''x''.
* '''Pass:''' A [[String]] and two [[Int|Ints]]
* '''Returns:''' A [[String]] which is a subset of the string passed into it
== Example ==
#include <string>
var a:="hello";
var c:=substring(a,2,4);
[[Category:Function Library]]
[[Category:String Functions]]
b304cf49197dced85ff1fd7a11f60947c5d15bc2
Toint
0
128
700
699
2013-01-13T13:23:53Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This toint(s) function will convert the string ''s'' into an integer.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
var a:="234";
var c:=toint(a);
[[Category:Function Library]]
[[Category:String Functions]]
e424a6b12a2e9d4b30c38e19c0ebd9805ab15fe3
Itostring
0
205
1118
2013-01-13T13:25:22Z
Polas
1
Created page with '== Overview == The itostring(n) function will convert the variable or value ''n'' into a string. * '''Pass:''' An [[Int]] * '''Returns:''' A [[String]] == Example == #includ…'
wikitext
text/x-wiki
== Overview ==
The itostring(n) function will convert the variable or value ''n'' into a string.
* '''Pass:''' An [[Int]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:=234;
var c:=itostring(a);
[[Category:Function Library]]
[[Category:String Functions]]
35b7e5b25bfe71c55463e0b51d836b86a8afcb23
Dtostring
0
206
1122
2013-01-13T13:26:19Z
Polas
1
Created page with '== Overview == The dtostring(d, a) function will convert the variable or value ''d'' into a string using the formatting supplied in ''a''. * '''Pass:''' A [[Double]] and [[Stri…'
wikitext
text/x-wiki
== Overview ==
The dtostring(d, a) function will convert the variable or value ''d'' into a string using the formatting supplied in ''a''.
* '''Pass:''' A [[Double]] and [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:=23.4352;
var c:=dtostring(a, "%.2f");
[[Category:Function Library]]
[[Category:String Functions]]
4e22f81ca6fee2a682bfc8aaee7e3444200b0c18
Uppercase
0
129
705
704
2013-01-13T13:26:56Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This uppercase(s) function will return the upper case result of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:="HeLlO";
var c:=uppercase(a);
[[Category:Function Library]]
[[Category:String Functions]]
b192e2ce71df87be3c80f32b29bf033874bb75e3
Category:System Functions
14
107
584
583
2013-01-13T13:27:14Z
Polas
1
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by including ''<system>'' via the preprocessor
71eac3e1c287cdd004d63d44ae0305abf1ba8bde
Getepoch
0
207
1126
2013-01-13T13:28:39Z
Polas
1
Created page with '== Overview == This getepoch() function will return the number of milliseconds since the epoch (1st January 1970). * '''Pass:''' Nothing * '''Returns:''' [[Long]] containing th…'
wikitext
text/x-wiki
== Overview ==
This getepoch() function will return the number of milliseconds since the epoch (1st January 1970).
* '''Pass:''' Nothing
* '''Returns:''' [[Long]] containing the number of milliseconds
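== Example ==
The following is an assumed usage sketch (the ''#include <system>'' line and the subtraction of the two returned values are assumptions based on the other functions in this library):
#include <system>
var start:=getepoch();
var finish:=getepoch();
var elapsed:=finish-start;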
[[Category:Function Library]]
[[Category:System Functions]]
555691a9d8cc05447c1c94f675d4248975602628
Displaytime
0
130
710
709
2013-01-13T13:28:56Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This displaytime() function will display the timing results recorded by the function [[recordtime]] along with the process ID. This is very useful for debugging or performance testing.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
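== Example ==
A minimal sketch, assuming the ''<system>'' include as with the other functions in this library; a time is recorded via [[recordtime]] and then displayed:
#include <system>
recordtime();
displaytime();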
[[Category:Function Library]]
[[Category:System Functions]]
f946e565e91cd73f813c48144e46df0e684c0845
Recordtime
0
131
714
713
2013-01-13T13:29:11Z
Polas
1
wikitext
text/x-wiki
This recordtime() function records the current (wall clock) execution time upon reaching that point. This is useful for debugging or performance testing; the recorded times can be displayed via the [[displaytime]] function.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
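== Example ==
A minimal sketch, assuming the ''<system>'' include as with the other functions in this library; two points are recorded and then displayed via [[displaytime]]:
#include <system>
recordtime();
recordtime();
displaytime();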
[[Category:Function Library]]
[[Category:System Functions]]
a6fbacd04b0499bc8425c2f5d42e63456042edbd
Gc
0
208
1129
2013-01-13T13:30:28Z
Polas
1
Created page with '== Overview == The gc() function will collect any garbage memory. Memory allocated via the [[Heap]] type is subject to garbage collection, which will occur automatically during …'
wikitext
text/x-wiki
== Overview ==
The gc() function will collect any garbage memory. Memory allocated via the [[Heap]] type is subject to garbage collection, which will occur automatically during program execution but can be invoked manually via this function call.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
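== Example ==
A minimal sketch, assuming the ''<system>'' include as with the other functions in this library, which manually invokes a collection:
#include <system>
gc();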
[[Category:Function Library]]
[[Category:System Functions]]
875ff0a6f0e475ac333651fe22b795246dfdad41
Exit
0
132
718
717
2013-01-13T13:30:44Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This exit() function will cease program execution and return to the operating system. From an implementation point of view, this will return ''EXIT_SUCCESS'' to the OS.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
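== Example ==
A minimal sketch, assuming the ''<system>'' include as with the other functions in this library:
#include <system>
exit();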
[[Category:Function Library]]
[[Category:System Functions]]
0b5d6c08e2539e3d0db1989f929be039236f0973
Oscli
0
133
722
721
2013-01-13T13:33:15Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This oscli(a) function will pass the command line command ''a'' (e.g. a Unix shell or MS-DOS command) to the operating system for execution.
* '''Pass:''' A [[String]] representing the command
* '''Returns:''' Nothing
* '''Throws:''' The error string ''oscli'' if the operating system returns an error to this call
== Example ==
#include <io>
#include <system>
var a:String;
input(a);
try {
oscli(a);
} catch ("oscli") {
print("Error in executing command\n");
};
The above program is a simple interface, allowing the user to input a command which is then passed to the OS for execution. The ''oscli'' call is wrapped in a try-catch block which will detect when the user has requested an erroneous command; this explicit error handling is entirely optional.
[[Category:Function Library]]
[[Category:System Functions]]
b3f1f0f9036bc309e8b4282096b468ed956e5e7d
Mesham
0
5
21
20
2013-01-13T14:57:16Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= Introduction}}
{{Box|subject= Downloads}}
| style="padding: 0 0 0 10px; width: 50%; vertical-align: top;" |
<!-- Second column -->
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Documentation}}
{{Box|subject= Examples}}
|}
099806a466e43862644e369c3b91c3b9e6c0a59f
22
21
2013-01-13T15:06:16Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 25%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= Introduction|title= Quick start}}
{{Box|subject= Downloads|title= Downloads}}
| style="padding: 0 0 0 10px; width: 50%; vertical-align: top;" |
<!-- Second column -->
| style="padding: 0 0 0 10px; width: 25%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Documentation|title= Documentation}}
{{Box|subject= Examples|title= Examples}}
|}
39ce4573a8eb50bfeffc79144dc2ce70af1bd9e0
23
22
2013-01-13T15:12:10Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 66%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= News|title= Latest developments}}
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 50%; vertical-align: top;" |
{{Box|subject= Documentation|title= Documentation}}
| style="padding: 0 0 0 10px; width: 50%; vertical-align: top;" |
{{Box|subject= Examples|title= Examples}}
|}
| style="padding: 0 0 0 10px; width: 33%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Introduction|title= Quick start}}
{{Box|subject= Downloads|title= Downloads}}
|}
0294bc66e8cef5a6bd78c7b87a5754ad25fd7ee2
Template:Help Us
10
7
34
33
2013-01-13T14:58:53Z
Polas
1
wikitext
text/x-wiki
<!--<div style="margin: 0 0 15px 0; padding: 0.2em; background-color: #EFEFFF; color: #000000; border: 1px solid #9F9FFF; text-align: center;">
'''Mesham always needs your help! See the [[Wish List]] for more information.'''
</div>-->
95023eb69f0fb5c9b3b39fe0bea0b51a2c337ec8
Template:Box
10
6
29
28
2013-01-13T15:05:32Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 1px solid #CEDFF2; padding:0.6em 0.8em;">
<h2 style="margin:0;background-color:#CEDFF2;font-size:120%;font-weight:bold;border:1px solid #A3B0BF;text-align:left;color:#000;padding:0.2em 0.4em;">{{{title}}}</h2>
{{{{{subject}}}}}
</div>
c1ecd7ac40774eb1bcb5bf494422f0e44b61937e
30
29
2013-01-13T15:10:28Z
Polas
1
wikitext
text/x-wiki
<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 0px solid #CEDFF2; padding:0.6em 0.8em;">
<h2 style="margin:0;background-color:#CEDFF2;font-size:120%;font-weight:bold;border:1px solid #A3B0BF;text-align:left;color:#000;padding:0.2em 0.4em;">{{{title}}}</h2>
{{{{{subject}}}}}
</div>
0c34a4fcc1c10a40fb3864504b45326b1b8e02d5
Template:News
10
209
1132
2013-01-13T15:08:52Z
Polas
1
Created page with '* New site design'
wikitext
text/x-wiki
* New site design
7d879615f55c800d6892f02c8ed0ce845403c5bb
Template:Applicationbox
10
210
1148
2013-01-13T17:05:04Z
Polas
1
Created page with '{| class="infobox bordered" style="background-color:#DDDDDD; border-color:#111111; border-style:solid; border-width:1px; float:right; font-size:90%; margin:5px 5px 5px 5px; text-…'
wikitext
text/x-wiki
{| class="infobox bordered" style="background-color:#DDDDDD; border-color:#111111; border-style:solid; border-width:1px; float:right; font-size:90%; margin:5px 5px 5px 5px; text-align:left; width:30em;"
|-
| colspan="2" style="text-align:center; font-size: large;" | '''{{{name}}}'''
|-
! Icon:
| [[Image:{{{image}}}|left|{{{caption}}}]]
|-
! Maintained by:
| {{{maintainer}}}
|-
! Description:
| {{{desc}}}
|-
! OS Restrictions:
| {{{os}}}
|-
! Languages:
| {{{languages}}}
|-
! Alternatives:
| {{{alt}}}
|-
! Website:
| {{{url}}}
|}
435ce342ff0a8da78e763826e504896404bf1578
1149
1148
2013-01-13T17:08:41Z
Polas
1
wikitext
text/x-wiki
{| class="infobox bordered" style="background-color:#DDDDDD; border-color:#111111; border-style:solid; border-width:1px; float:right; font-size:90%; margin:5px 5px 5px 5px; text-align:left; width:30em;"
|-
| colspan="2" style="text-align:center; font-size: large;" | '''{{{name}}}'''
|-
! Icon:
| [[Image:{{{image}}}|left|{{{caption}}}]]
|-
! Description:
| {{{desc}}}
|-
! Version:
| {{{version}}}
|-
! Released:
| {{{released}}}
|-
! Author:
| {{{author}}}
|-
! Website:
| {{{url}}}
|}
22cce7a679ed7459a3a917991418a2fc61831c0a
File:Mesham.gif
6
211
1151
2013-01-13T17:23:43Z
Polas
1
Mesham arjuna logo
wikitext
text/x-wiki
Mesham arjuna logo
18147eae74106487894c9dcbd40dd8088e84cfd0
Download 0.5
0
158
865
864
2013-01-13T17:24:44Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Version 0.5|author=[[User:polas|Nick Brown]]|desc=The latest release from the Arjuna compiler line. Based upon FlexibO this version is deprecated but still contains some useful types.|url=http://www.mesham.com|image=mesham.gif|version=0.5|released=January 2010}}
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (the limitation is mainly in the runtime library), although more experienced developers may still be able to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar.gz Mesham 0.5 here] (700KB)
== Installation Instructions ==
There are three basic components required for installing Mesham: the client, the server and the runtime library.
* Install Java RTE from java.sun.com
* Make sure you have a C compiler installed i.e. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good ones, you choose
* The three different components must be configured to your machine and where they are situated, happily this is all automated in the installlinux script.
Open a terminal and cd into your Mesham directory - i.e. cd /home/work/mesham
Then issue the command ./installlinux and follow the on screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from where they are now - this will cause problems and if required you must rerun the script and remake them.
* Now type make all
* If you have root access, login as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these 7 steps you have installed the Mesham language onto your computer!
== Using the Compiler ==
Assuming you have installed the language, you will now want to start writing some code! First you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver. The server will start up, reporting the version number and date of the Mesham compiler, and will then report when it is ready.
Next, start a new terminal; if you are using MPICH 2, run an MPI daemon by typing mpd &. Create a Mesham source file (see the language documentation for information about the language itself) and compile it with mc. For instance, if the source file is named hello.mesh, compile it with mc hello.mesh. You should see an executable called hello.
Run the executable via ./hello (or whatever it is called). You do not need to run it via the mpirun or mpiexec command (although you can if you want), as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java.
NB: If you wish to change the configuration information created by the installer (this is for advanced users and is not normally required), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the makefile and removing the ''#'' before the relevant line. The two optional features are the files supporting the Gadget-2 port (Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine).
89bfa37123166eea8f34efdb0a4b78550c464a5f
866
865
2013-01-13T17:25:24Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Version 0.5|author=[[User:polas|Nick Brown]]|desc=The latest release from the Arjuna compiler line. Based upon FlexibO this version is deprecated but still contains some useful types.|url=http://www.mesham.com|image=mesham.gif|version=0.5|released=January 2010}}
''Please Note: This version of Mesham is deprecated; if possible, please use the latest version from the website''
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (the limitation lies mainly in the runtime library), although more experienced developers may still be able to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar.gz Mesham 0.5 here] (700KB).
== Installation Instructions ==
Installing Mesham involves three basic components: the client, the server and the runtime library.
* Install the Java RTE from java.sun.com
* Make sure you have a C compiler installed, e.g. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
* The three components must be configured for your machine and their locations; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If the command will not run, issue chmod +x installlinux and then try again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems, and if a move is required you must rerun the script and remake them.
* Now type make all
* If you have root access, log in as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language onto your computer!
== Using the Compiler ==
Assuming you have installed the language, you will now want to start writing some code! First you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver. The server will start up, reporting the version number and date of the Mesham compiler, and will then report when it is ready.
Next, start a new terminal; if you are using MPICH 2, run an MPI daemon by typing mpd &. Create a Mesham source file (see the language documentation for information about the language itself) and compile it with mc. For instance, if the source file is named hello.mesh, compile it with mc hello.mesh. You should see an executable called hello.
Run the executable via ./hello (or whatever it is called). You do not need to run it via the mpirun or mpiexec command (although you can if you want), as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java.
NB: If you wish to change the configuration information created by the installer (this is for advanced users and is not normally required), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the makefile and removing the ''#'' before the relevant line. The two optional features are the files supporting the Gadget-2 port (Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine).
f512743baa5545f736a0a048dd350f1d9e86969b
867
866
2013-01-13T17:28:41Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Mesham 0.5|author=[[User:polas|Nick Brown]]|desc=The latest release from the Arjuna compiler line. Based upon FlexibO this version is deprecated but still contains some useful types.|url=http://www.mesham.com|image=mesham.gif|version=0.5|released=January 2010}}
''Please Note: This version of Mesham is deprecated; the documentation and examples on this website are no longer compatible with this version.''
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language from the [[Arjuna]] line and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (the limitation lies mainly in the runtime library), although more experienced developers may still be able to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar.gz Mesham 0.5 here] (700KB).
== Installation Instructions ==
Installing Mesham involves three basic components: the client, the server and the runtime library.
* Install the Java RTE from java.sun.com
* Make sure you have a C compiler installed, e.g. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
* The three components must be configured for your machine and their locations; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If the command will not run, issue chmod +x installlinux and then try again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems, and if a move is required you must rerun the script and remake them.
* Now type make all
* If you have root access, log in as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language onto your computer!
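Taken together, the steps above amount to the following terminal session. This is only a sketch: the /home/work/mesham path is the example directory used above, so substitute wherever you extracted the Mesham tarball.

```shell
# Move into the directory the Mesham tarball was extracted to
# (this path is just the example from the instructions above)
cd /home/work/mesham

# Make the install script executable if needed, then run it
# and follow the on-screen prompts
chmod +x installlinux
./installlinux

# Build, install (as root if you have root access), then tidy up
make all
make install
make clean
```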
== Using the Compiler ==
Assuming you have installed the language, you will now want to start writing some code! First you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver. The server will start up, reporting the version number and date of the Mesham compiler, and will then report when it is ready.
Next, start a new terminal; if you are using MPICH 2, run an MPI daemon by typing mpd &. Create a Mesham source file (see the language documentation for information about the language itself) and compile it with mc. For instance, if the source file is named hello.mesh, compile it with mc hello.mesh. You should see an executable called hello.
Run the executable via ./hello (or whatever it is called). You do not need to run it via the mpirun or mpiexec command (although you can if you want), as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java.
NB: If you wish to change the configuration information created by the installer (this is for advanced users and is not normally required), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
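The compile-and-run workflow described above can be sketched as two terminal sessions. This assumes the layout produced by the install script and an MPICH 2 setup; hello.mesh is the example file name from the text.

```shell
# Terminal 1: start the Mesham translation server
cd /home/work/mesham/server
./runserver

# Terminal 2: start the MPICH 2 daemon, then compile and run
mpd &
mc hello.mesh    # produces an executable called hello
./hello          # no mpirun/mpiexec needed - processes are spawned automatically
```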
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the makefile and removing the ''#'' before the relevant line. The two optional features are the files supporting the Gadget-2 port (Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine).
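As a purely illustrative sketch of the kind of edit described above: enabling an optional feature means removing the leading ''#'' from its line in the makefile. The variable and file names below are hypothetical placeholders, not the actual names in the shipped makefile - check the makefile itself for the real lines.

```make
# Optional features are disabled by default; remove the leading '#'
# to enable one. These names are hypothetical placeholders.
#GADGET2_OBJS = peanohilbert.o snapshot.o paramfile.o
#HDF5_LIBS    = -lhdf5    # requires the HDF5 library to be installed
```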
f8ba7b9e7768083ed0fc63c6a4db07efc532645b
Download 0.41 beta
0
37
199
198
2013-01-13T17:26:27Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Version 0.41b|author=[[User:polas|Nick Brown]]|desc=The first release from the Arjuna compiler line and the last version to work on Windows. Based upon FlexibO this version is deprecated but still contains some useful types.|url=http://www.mesham.com|image=mesham.gif|version=0.41b|released=September 2008}}
''Please Note: This version of Mesham is deprecated; if possible, please use the latest version from the website''
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although some aspects are unavailable, which means the Gadget-2 port is not supported by this version (it requires 0.50). Having said that, version 0.41 is the only version which currently explicitly supports Windows. Explicit support for Windows will most likely be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b) here], a zip file of approximately 1MB which supports both POSIX systems and Windows. Full installation instructions for your specific system are included in the download and are also given on this page.
== Installation on POSIX Systems ==
*Install the Java RTE from java.sun.com
*Make sure you have a C compiler installed, e.g. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
*The three components must be configured for your machine and their locations; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If the command will not run, issue chmod +x installlinux and then try again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems, and if a move is required you must rerun the script and remake them.
*Now type make all
*If you have root access, log in as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language onto your computer! Now read the readme file for information on how to run the compiler.
NB: If you wish to change the configuration information created by the installer (this is for advanced users and is not normally required), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX based system and follow those instructions. No, seriously - many of the tools and much of the support for parallelism really are designed for Unix based OSes, and as such you will have an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although installation and usage on Windows should still be possible for an advanced user in the future). Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#The Java Run Time Environment from java.sun.com
#A C compiler and GNU Make - we suggest MinGW, at http://www.mingw.org/, which is a very good choice
#An implementation of MPI (see the MPI section for further details)
==== Install ====
Really, all the hard work of installing Mesham has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we suggest c:\mesham, but it really doesn't matter
*Now double click the installwindows.bat file - this will run the installation script; make sure you answer all the questions correctly (if you make an error, just rerun it). The script does a number of things: firstly it automatically configures the compiler with your settings, secondly it configures the server and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server without compiling the compiler, run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here; the simplest is to use one of our prebuilt libraries. In the libraries directory there are two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit), extract the contents of the appropriate zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32 . By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32 . If you wish to compile the runtime library rather than use our prebuilt ones, read the readme file in the libraries\windows directory. Note that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine; libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we suggest adding mc.exe (the file just compiled, in compiler\bin) to your MSDOS path. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham - if not, change the path accordingly.)
Note - if you ever wish to move the location of the language files, you will need to rerun the installwindows.bat file to reconfigure the setup. Secondly, there is a prebuilt server runner called winrunserver.bat with some default options. If you don't want to build the items, you can run this and then run compiler/wingui.bat for the Mesham into C viewer; without any other steps that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''': you MUST make the MPI executable files visible to Mesham. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 - if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, you can create Mesham source files and compile them anywhere. This text explains how to get up and running; consult the language manual for specific language details.
*First, run the server - this can be found in the server directory; simply double click runserver.bat . The server will start up (this can take a few moments) and will tell you when it is ready
*Now, create a file - let's call it a.mesh. For the contents, just put in:
var a:=34;
print[a,"\n"];
*Open an MSDOS terminal window, change to the directory where a.mesh is located and type mc a.mesh . The compiler should generate a.exe , which you can run via MSDOS or by double clicking on it. There are lots of other options; type mc -h to find out more
If there are any problems, you might need to configure or experiment with your MPI implementation. Certainly with MPICH2 you might need to start the process manager, smpd.exe, in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the C code, but not compile it, you can use the language's C code viewer by double clicking windowsgui.bat in compiler\java.
==== MPI for Windows ====
It doesn't matter which implementation you install. Having said that, it seems that the majority of implementations have been created with Unix in mind rather than Windows. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install it; this will work automatically via the MPICH installer.
There are other options too; OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. It is hoped to make 0.50 available for download as soon as possible. There are some important differences between the two versions; the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
eccb00f4a4d6dd1c774f49266b0c8bb8ebd50776
200
199
2013-01-13T17:27:59Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Mesham 0.41b|author=[[User:polas|Nick Brown]]|desc=The first release from the Arjuna compiler line and the last version to work on Windows. Based upon FlexibO this version is deprecated but still contains some useful types.|url=http://www.mesham.com|image=mesham.gif|version=0.41b|released=September 2008}}
''Please Note: This version of Mesham is deprecated; the documentation and examples on this website are no longer compatible with this version.''
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although some aspects are unavailable, which means the Gadget-2 port is not supported by this version (it requires 0.50). Having said that, version 0.41 is the only version which currently explicitly supports Windows. Explicit support for Windows will most likely be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b) here], a zip file of approximately 1MB which supports both POSIX systems and Windows. Full installation instructions for your specific system are included in the download and are also given on this page.
== Installation on POSIX Systems ==
*Install the Java RTE from java.sun.com
*Make sure you have a C compiler installed, e.g. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
*The three components must be configured for your machine and their locations; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If the command will not run, issue chmod +x installlinux and then try again.
After running the install script, the library, compiler and server should not be moved from their current locations - doing so will cause problems, and if a move is required you must rerun the script and remake them.
*Now type make all
*If you have root access, log in as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language onto your computer! Now read the readme file for information on how to run the compiler.
NB: If you wish to change the configuration information created by the installer (this is for advanced users and is not normally required), you can - the installer tells you where it has written its config files, and documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX based system and follow those instructions. No, seriously - many of the tools and much of the support for parallelism really are designed for Unix based OSes, and as such you will have an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although installation and usage on Windows should still be possible for an advanced user in the future). Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#The Java Run Time Environment from java.sun.com
#A C compiler and GNU Make - we suggest MinGW, at http://www.mingw.org/, which is a very good choice
#An implementation of MPI (see the MPI section for further details)
==== Install ====
Really, all the hard work of installing Mesham has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we suggest c:\mesham, but it really doesn't matter
*Now double click the installwindows.bat file - this will run the installation script; make sure you answer all the questions correctly (if you make an error, just rerun it). The script does a number of things: firstly it automatically configures the compiler with your settings, secondly it configures the server and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server without compiling the compiler, run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here; the simplest is to use one of our prebuilt libraries. In the libraries directory there are two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit), extract the contents of the appropriate zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32 . By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32 . If you wish to compile the runtime library rather than use our prebuilt ones, read the readme file in the libraries\windows directory. Note that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine; libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we suggest adding mc.exe (the file just compiled, in compiler\bin) to your MSDOS path. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham - if not, change the path accordingly.)
Note - if you ever wish to move the location of the language files, you will need to rerun the installwindows.bat file to reconfigure the setup. Secondly, there is a prebuilt server runner called winrunserver.bat with some default options. If you don't want to build the items, you can run this and then run compiler/wingui.bat for the Mesham into C viewer; without any other steps that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''': you MUST make the MPI executable files visible to Mesham. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 - if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, you can create Mesham source files and compile them anywhere. This text explains how to get up and running; consult the language manual for specific language details.
*First, run the server - this can be found in the server directory; simply double click runserver.bat . The server will start up (this can take a few moments) and will tell you when it is ready
*Now, create a file - let's call it a.mesh. For the contents, just put in:
var a:=34;
print[a,"\n"];
*Open an MSDOS terminal window, change to the directory where a.mesh is located and type mc a.mesh . The compiler should generate a.exe , which you can run via MSDOS or by double clicking on it. There are lots of other options; type mc -h to find out more
If there are any problems, you might need to configure or experiment with your MPI implementation. Certainly with MPICH2 you might need to start the process manager, smpd.exe, in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the C code, but not compile it, you can use the language's C code viewer by double clicking windowsgui.bat in compiler\java.
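The Windows workflow above, condensed into an MSDOS session. This is a sketch only: it assumes mc.exe and the MPI binaries are already on the Path as described, the server has been started via runserver.bat, and c:\work stands in for whatever folder holds your a.mesh file.

```bat
rem Change to the folder containing a.mesh (c:\work is a placeholder)
cd c:\work

rem Compile the source file; this should generate a.exe
mc a.mesh

rem Run the generated executable (double clicking it also works)
a.exe

rem List the other compiler options
mc -h
```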
==== MPI for Windows ====
It doesn't matter which implementation you install. Having said that, it seems that the majority of implementations have been created with Unix in mind rather than Windows. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install it; this will work automatically via the MPICH installer.
There are other options too; OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. It is hoped to make 0.50 available for download as soon as possible. There are some important differences between the two versions; the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
3f1b8f553ee2211ed7914b48e66b50489034426c
File:Runtimelibrary.png
6
212
1153
2013-01-13T17:32:23Z
Polas
1
Runtime library icon
wikitext
text/x-wiki
Runtime library icon
4cdf1b63469639f8e3882a9cb001ce3c1443d3fa
Download rtl 0.2
0
159
873
872
2013-01-13T17:32:51Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Runtime library 0.2|author=[[User:polas|Nick Brown]]|desc=The runtime library required for Mesham 0.5.|url=http://www.mesham.com|image=Runtimelibrary.png|version=0.2|released=January 2010}}
''Please Note: This version of the runtime library is deprecated, but it is required for [[Download_0.5|Mesham 0.5]]''
== Runtime Library Version 0.2 ==
Version 0.2 is currently the most up-to-date version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many advantages and improvements over the previous version, and as such it is suggested you use it. The version on this page is backwards compatible with version 0.41(b). This version does not explicitly support the Windows OS, although it should be possible for an experienced programmer to install it on that system.
== Download ==
You can download the [http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2 here] (28KB)
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 0.5|Download 0.5 Package]] page.
40c1577bf31a445fe49e9155a3d95f13b5effb1b
Download rtl 0.1
0
145
815
814
2013-01-13T17:34:19Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Runtime library 0.1|author=[[User:polas|Nick Brown]]|desc=The runtime library required for Mesham 0.41b.|url=http://www.mesham.com|image=Runtimelibrary.png|version=0.1|released=September 2008}}
''Please Note: This version of the runtime library is deprecated but required for [[Download_0.41_beta|Mesham 0.41b]]''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library Version 0.1 and the last version to provide explicit support for Windows Operating Systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b), it will not work with Mesham 0.5.
== Download ==
You can download version 0.1 of the [http://www.mesham.com/downloads/libraries01.zip Runtime Library here] ''(Source cross platform compatible.)''
You can download version 0.1 of the [http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library here] ''(Binary for Windows 32 bit.)''
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99 conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI.)
884aa7f2bd384f8262f055306485a2d4b15c630f
Arjuna
0
175
932
931
2013-01-13T17:41:09Z
Polas
1
wikitext
text/x-wiki
==Introduction==
The Arjuna line of compilers for Mesham is versioned from 0.0 up to 0.99. The latest compiler release based upon the Arjuna core is [[Download_0.5|0.5]]. The reason for the distinction is that it was decided to rewrite the compiler, so a clear distinction between the architectures and technologies is useful. Arjuna was the informal name of the language, and specifically of the compiler, before the name Mesham was decided upon.
== Download ==
'''The Arjuna line is entirely deprecated now, please use the [[Oubliette]] line'''
It is possible to download the latest Arjuna line version 0.5 [[Download_0.5|here]] and the compatible runtime can be found [[Download_rtl_0.2|here]]. Whilst the website examples and documentation have moved on, you can view the change lists to understand how to use the Arjuna line.
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object-oriented language designed for compiler writing (this is certainly the biggest project in that language). The reason for this choice was that the compiler was fast to write in this language and very flexible, although quite slow in translation. This aspect of the code is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, kept quite separate and connected in via defined services.
FlexibO does have its limits, and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2,000 lines, acts as a band-aid for FlexibO; for instance, it adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3,000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, [[Oubliette]], is actually based on the existing RTL, but changes and modifications to the language specification mean that the two are not mutually compatible.
==Advantages==
Arjuna works by the compiler writer hand crafting each aspect, whether a core function or a library, specifying the resulting compiled code and any optimisation to be applied. Whilst this produces very efficient results, it is time consuming and does not allow the Mesham programmer to specify their own types in their code. Arjuna is also very flexible: vast changes to the language were quite easy to implement. This level of flexibility would not be present in other solutions, and from an iterative language design point of view it was an essential requirement.
==Disadvantages==
So why rewrite the compiler? Flexibility comes at a price: slow compilation. Now that the language has reached a level of maturity, the core aspects can be written without worry that they will change much. It would also be good to allow programmers to design and implement types in their own Mesham code, which the architecture of Arjuna would find difficult (although not impossible) to support.
There is the additional fact that Arjuna has been modified and patched so much that the initial clean design is starting to blur; with the lessons learned, a much cleaner compiler can be created.
446881d699c32624bcca69d3d3ae7b296bcea712
933
932
2013-01-13T17:43:23Z
Polas
1
wikitext
text/x-wiki
[[File:mesham.gif|right]]
==Introduction==
The Arjuna line of compilers for Mesham is versioned from 0.0 up to 0.99. The latest compiler release based upon the Arjuna core is [[Download_0.5|0.5]]. The reason for the distinction is that it was decided to rewrite the compiler, so a clear distinction between the architectures and technologies is useful. Arjuna was the informal name of the language, and specifically of the compiler, before the name Mesham was decided upon.
== Download ==
'''The Arjuna line is entirely deprecated now, please use the [[Oubliette]] line'''
It is possible to download the latest Arjuna line version 0.5 [[Download_0.5|here]] and the compatible runtime can be found [[Download_rtl_0.2|here]]. Whilst the website examples and documentation have moved on, you can view the change lists to understand how to use the Arjuna line.
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object-oriented language designed for compiler writing (this is certainly the biggest project in that language). The reason for this choice was that the compiler was fast to write in this language and very flexible, although quite slow in translation. This aspect of the code is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, kept quite separate and connected in via defined services.
FlexibO does have its limits, and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2,000 lines, acts as a band-aid for FlexibO; for instance, it adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3,000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, [[Oubliette]], is actually based on the existing RTL, but changes and modifications to the language specification mean that the two are not mutually compatible.
==Advantages==
Arjuna works by the compiler writer hand crafting each aspect, whether a core function or a library, specifying the resulting compiled code and any optimisation to be applied. Whilst this produces very efficient results, it is time consuming and does not allow the Mesham programmer to specify their own types in their code. Arjuna is also very flexible: vast changes to the language were quite easy to implement. This level of flexibility would not be present in other solutions, and from an iterative language design point of view it was an essential requirement.
==Disadvantages==
So why rewrite the compiler? Flexibility comes at a price: slow compilation. Now that the language has reached a level of maturity, the core aspects can be written without worry that they will change much. It would also be good to allow programmers to design and implement types in their own Mesham code, which the architecture of Arjuna would find difficult (although not impossible) to support.
There is the additional fact that Arjuna has been modified and patched so much that the initial clean design is starting to blur; with the lessons learned, a much cleaner compiler can be created.
b1057ca20329c172027c55bcf589a7c06c605dfe
Oubliette
0
176
939
938
2013-01-13T17:47:22Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is completely rewritten from the previous [[Arjuna]], using lessons learned and the fact that the language has reached a stable state in terms of definition and the type oriented approach.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and it is intended in the future to support extra libraries via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette just considers these to be normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
6cfd8c8409cd3dd6ae4336937eb81c33e9647117
File:Spec.png
6
213
1155
2013-01-13T17:53:39Z
Polas
1
Language specification
wikitext
text/x-wiki
Language specification
a6c03d5a30547b6c09595ea22f0dbebbeef99f62
Specification
0
177
975
974
2013-01-13T17:54:04Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Specification 1.0a_3|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham language specification|url=http://www.mesham.com|image=Spec.png|version=1.0a_3|released=November 2012}}
''The latest version of the Mesham language specification is 1.0a_3''
== Version 1.0a_3 - November 2012 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_3, is available for download. This version was released in November 2012 and is the base specification version in the 1 series. It builds upon the previous 0.5 language by formalising some aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language types but with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality which can be used by the programmer.
Download [http://www.mesham.com/downloads/specification1a3.pdf this latest version here]
1ca281f3f369c63edee6098490e66960d798dea5
What is Mesham
0
15
96
95
2013-01-13T17:55:57Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==Introduction==
As technical challenges increase, the notion of using many computers to solve tasks is a very attractive one and has been the focus of much research. However, as the hardware has matured, a weakness in this field has been exposed: it is actually very difficult to write parallel programs of any complexity, and if the programmer is not careful they can end up with an abomination to maintain. Up until this point, simplicity of programming and efficiency have been a tradeoff, with the most common parallel codes being written in low level languages.
==Mesham==
'''Mesham''' is a programming language designed to simplify High Performance Computing (HPC) yet produce highly efficient executables. This is achieved mainly via the type system: allowing programmers to provide extra typing information not only lets the compiler perform far more optimisation than is traditionally possible, it also enables conceptually simple programs to be written. Code written in Mesham is relatively simple, efficient, portable and safe.
==Type Oriented Programming==
In ''type oriented programming'' the majority of the complexity of the language is taken away and put into the type system. Whilst abstractions such as functional programming and object orientation have become popular and widespread, use of the type system in this way is completely novel. Placing the complexity of the language into the type system allows for a simple language yet yields high performance due to the rich amount of information readily available to the compiler.
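As a brief illustrative sketch (the identifiers here are placeholders, using only constructs documented elsewhere on this wiki), type information sits alongside otherwise simple code:
 #include <io>
 var i: Int;
 var p;
 par p from 0 to 3 {
    print("Hello from a parallel process\n");
 };
The declaration of ''i'' carries explicit type information for the compiler, whilst the ''par'' block expresses the parallelism; richer typing information, where supplied, is what drives the compiler's optimisation.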
==Why Mesham?==
'''Mesham''' will be of interest to many different people:
*Scientists - With Mesham you can write simple yet highly efficient parallel HPC code which can easily run on a cluster of machines
*HPC Programmers - Mesham can be used in conjunction with Grid computing, with the program being run over a heterogeneous resource
*Normal Computer Users - Programs written in Mesham run seamlessly on SMPs, as a programmer you can take advantage of these multiple processors for common tasks
46c6e8fe76b61074ffc5984bf4554b8f80832120
Group
0
181
994
993
2013-01-13T17:57:38Z
Polas
1
wikitext
text/x-wiki
''Since: Version 1.0''
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, which are either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
d1ef6e128a439384416806fbf1f7397cc4478206
995
994
2013-01-13T18:00:48Z
Polas
1
wikitext
text/x-wiki
''Since: Version 1.0''
{{DISPLAYTITLE:xyz}}
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, which are either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
1be91b3965268026c9db6626ece27dbfa04cb71a
996
995
2013-01-13T18:01:33Z
Polas
1
wikitext
text/x-wiki
{{DISPLAYTITLE:xyz}}
''Since: Version 1.0''
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, which are either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
b534a2649c0eef1b86a6fd039a78f4a62d20ac37
997
996
2013-01-13T18:05:18Z
Polas
1
wikitext
text/x-wiki
{{DISPLAYTITLE:Group - since version 1.0}}
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, which are either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
3efc0362716f8ac74efc986d441589f315b73218
998
997
2013-01-13T18:05:27Z
Polas
1
wikitext
text/x-wiki
{{DISPLAYTITLE:Group - ''since version 1.0''}}
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, which are either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
[[Category:Parallel]]
fbe257a746d8cf55d29b4f33c717d4cd83686dd9
999
998
2013-01-13T18:06:25Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, which are either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
''Since:'' Version 1.0
[[Category:Parallel]]
01775943c404727d807b4ccdc5ffca6f903aa4f5
1000
999
2013-01-13T18:08:31Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, which are either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' construct guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
The code fragment will involve 4 processes (0 to 3 inclusive.) Only process zero and process three will display the message.
''Since: Version 1.0''
[[Category:Parallel]]
9a939c99ac7e42ee821d066cbe9937f36031ebe9
Par
0
39
216
215
2013-01-13T18:06:52Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop, each iteration will execute concurrently on different processes. This allows the programmer to write code MPMD style, with the limitation that bounds ''a'' and ''b'' must be known during compilation. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
var p;
par p from 0 to 9 {
print("Hello world\n");
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
''Since:'' Version 0.41b
[[Category:Parallel]]
6fe4bb9df3a91428028928d39342c165767b0f47
217
216
2013-01-13T18:08:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop, each iteration will execute concurrently on different processes. This allows the programmer to write code MPMD style, with the limitation that bounds ''a'' and ''b'' must be known during compilation. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
var p;
par p from 0 to 9 {
print("Hello world\n");
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
''Since: Version 0.41b''
[[Category:Parallel]]
aac556c9e7a43901c43cf4c2143c1fd481bc9897
Proc
0
40
226
225
2013-01-13T18:07:08Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and multiply allocated will in fact, by inference, be allocated to the group of processes containing the single process whose rank is the same as that of the proc block.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
''Since:'' Version 0.41b
[[Category:Parallel]]
8559042ebc2755309ab5d7fbd4d7045a94feb1b0
227
226
2013-01-13T18:08:53Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and multiply allocated will in fact, by inference, be allocated to the group of processes containing the single process whose rank is the same as that of the proc block.<br>
''Note:'' This is a blocking construct and, regardless of its arguments, involves all processes, each of which will either ignore it or execute the block.
== Example ==
#include <io>
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
''Since: Version 0.41b''
[[Category:Parallel]]
29d8dc29341cac605a34cbece77d2f0f25066b29
Sync
0
41
234
233
2013-01-13T18:07:20Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
sync name;
Where the optional ''name'' is a variable.
== Semantics ==
Will synchronise processes, acting as a blocking call involving all processes. This keyword is linked with default shared memory communication and other types. Omitting the variable name will result in synchronisation for all appropriate constructs. This can be thought of as a barrier, and the value of a variable can only be guaranteed after the appropriate barrier has completed.
''Since:'' Version 0.5
[[Category:Parallel]]
665f4a1748111740a6161c9c13eb6e1b6dcfefc1
235
234
2013-01-13T18:09:04Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
sync name;
Where the optional ''name'' is a variable.
== Semantics ==
Will synchronise processes, acting as a blocking call involving all processes. This keyword is linked with default shared memory communication and other types. Omitting the variable name will result in synchronisation for all appropriate constructs. This can be thought of as a barrier, and the value of a variable can only be guaranteed after the appropriate barrier has completed.
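== Example ==
An illustrative sketch (the variable name ''a'' is hypothetical, and this assumes ''a'' uses a default communication type for which ''sync'' completes outstanding communication):
 var a;
 proc 0 {
    a:=12;
 };
 sync a;
Only after the ''sync'' barrier has completed can the value of ''a'' be relied upon by the processes involved.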
''Since: Version 0.5''
[[Category:Parallel]]
a388fc63f8d0b122c050c31f6015466ff6636f60
Include
0
179
985
984
2013-01-13T18:07:44Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
<nowiki>#</nowiki>include [sourcefile]
== Semantics ==
Will read in the Mesham source file specified and embed its contents into the code at the current location.
== Example ==
#include "test.mesh"
#include <io>
The preprocessing stage will replace the first include with the contents of ''test.mesh'', and the second include with the contents of ''io''. In the absence of the ''.mesh'' ending, the preprocessor will attempt to match on the absolute filename first and, if this cannot be found, will then look for a file with the corresponding name and ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will additionally search the system include locations, which have priority.
''Since:'' Version 1.0
[[Category:preprocessor]]
ce8f81ed4f1896ad804d67aac09ae61852c4b69d
986
985
2013-01-13T18:07:57Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
<nowiki>#</nowiki>include [sourcefile]
== Semantics ==
Will read in the Mesham source file specified and embed its contents into the code at the current location.
== Example ==
#include "test.mesh"
#include <io>
The preprocessing stage will replace the first include with the contents of ''test.mesh'', and the second include with the contents of ''io''. In the absence of the ''.mesh'' ending, the preprocessor will attempt to match on the absolute filename first and, if this cannot be found, will then look for a file with the corresponding name and ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will additionally search the system include locations, which have priority.
''Since: Version 1.0''
[[Category:preprocessor]]
89ecdfa69e81e809e4cf9bab090f498a61b970b0
Include once
0
180
989
988
2013-01-13T18:08:09Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
<nowiki>#</nowiki>include_once [sourcefile]
== Semantics ==
Will read in the Mesham source file specified and will embed the contents of this source file into the code at the current location IF AND ONLY IF that specific file has not already been included before. This is a very useful mechanism to avoid duplicate includes when combining together multiple libraries.
== Example ==
#include_once "test.mesh"
#include_once "test.mesh"
The preprocessing stage will replace the first include with the contents of ''test.mesh'', but the second include_once will be ignored because that specific file has already been included. In the absence of the ''.mesh'' ending, the preprocessor will attempt to match on the absolute filename first and, if this cannot be found, will then look for a file with the corresponding name and ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will additionally search the system include locations, which have priority.
''Since: Version 1.0''
[[Category:preprocessor]]
67f9cfa9082f92d9e2e06a21298503496b646339
Assignment
0
26
139
138
2013-01-13T18:09:24Z
Polas
1
wikitext
text/x-wiki
==Syntax==
In order to assign a value to a variable, the programmer uses variable assignment.
[lvalue]:=[rvalue];
Where ''lvalue'' is a memory reference and ''rvalue'' a memory reference or expression
== Semantics==
Will assign the ''rvalue'' to the ''lvalue''.
== Examples==
var i:=4;
var j:=i;
In this example the variable ''i'' will be declared and set to the value 4, and the variable ''j'' will also be declared and set to the value of ''i'' (4). Via type inference the types of both variables will be ''Int''.
''Since: Version 0.41b''
[[Category:sequential]]
7e661cd93fbbfc53f493a449e6c75e7c07303b38
Break
0
29
156
155
2013-01-13T18:09:32Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
break;
== Semantics ==
Will break out of the current enclosing loop body
== Example ==
while (true) { break; };
Only one iteration of the loop will start; the ''break'' statement immediately exits the loop body.
''Since: Version 0.41b''
[[Category:sequential]]
4b8816bcc32d0ec516cfe0d9179768e251dae9ad
If
0
32
172
171
2013-01-13T18:09:43Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
if (condition)<br>
{<br>
then body<br>
} else {<br>
else body<br>
};<br>
== Semantics ==
Will evaluate the condition and, if true, will execute the code in the ''then body''. Optionally, if the condition is false, the code in the ''else body'' will be executed if this has been supplied by the programmer.
== Example ==
#include <io>
if (a==b) {
print("Equal");
};
In this code example two variables ''a'' and ''b'' are tested for equality. If they are equal then the message will be displayed. As no else section has been specified, no specific behaviour is adopted if they are unequal.
''Since: Version 0.41b''
[[Category:sequential]]
5590ba303c971c1461e98eed47ac6002b319e838
Currenttype
0
99
554
553
2013-01-13T18:09:55Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
currenttype varname;
== Semantics ==
Will return the current type of the variable.<br><br>
''Note:'' If a variable is used within a type context then this is assumed to be shorthand for the current type of that variable<br>
''Note:'' This is a static construct and hence only available during compilation. It must be statically deducible and not used in a manner that is dynamic.
== Example ==
var i: Int;
var q:currenttype i;
Will declare ''q'' to have the same type as ''i'', i.e. an integer.
''Since: Version 0.5''
[[Category:Sequential]]
[[Category:Types]]
52a2b936d5084e63e42b63fd210bbbe1cbd2ab25
Declaration
0
24
131
130
2013-01-13T18:10:07Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
All variables must be declared before they are used. In Mesham one may declare a variable via its value or explicit type.
var name;<br>
var name:=[Value];<br>
var name:[Type];<br>
Where ''name'' is the name of the variable being declared.
== Semantics ==
The environment will map the identifier to a storage location and the variable is then usable. In the case of a value being specified, the compiler will infer the type via type inference, either here or when the first assignment takes place.<br><br>
''Note:'' It is not possible to declare a variable with the value ''null'', as this is a special no-value placeholder and as such has no type.
== Examples ==
var a;
var b:=99;
a:="hello";
In the code example above, the variable ''a'' is declared; without any further information its type is inferred from its first use (to hold type String). Variable ''b'' is declared with value 99, an integer, and as such the type is inferred to be Int, allocated on multiple processes.
var t:Char;
var z:Char :: allocated[single[on[2]]];
Variable ''t'' is declared to be a character; without further type information it is also assumed to be on all processes (by default the type Char is allocated to all processes). Lastly, the variable ''z'' is declared to be of type character, but is allocated only on a single process (process 2).
''Since: Version 0.41b''
[[Category:sequential]]
b50ad9fc4120709ccfb49bb24640506985b364bd
Declaredtype
0
100
560
559
2013-01-13T18:10:34Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
declaredtype name
Where ''name'' is a variable name
== Semantics ==
Will return the declared type of the variable.<br><br>
''Note:'' This is a static construct only and its lifetime is limited to during compilation.
== Example ==
var i:Int;
i:i::const[];
i:declaredtype i;
This code example will firstly type ''i'' to be an [[Int]]. On line 2, the type of ''i'' is combined with the type [[const]] (enforcing read-only access to the variable's data). On line 3, the programmer reverts the variable back to its declared type (i.e. so one can write to the data).
''Since: Version 0.5''
[[Category:Sequential]]
[[Category:Types]]
af9dd909ebadb7b042457217b2d7a7a802f0ee45
For
0
27
146
145
2013-01-13T18:10:48Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
for i from a to b <br>
{<br>
forbody<br>
};<br>
== Semantics ==
The for loop can be thought of as syntactic sugar for a while loop, incrementing the variable after each pass; it will loop from ''a'' to ''b'' inclusive.
== Example ==
#include <io>
#include <string>
var i;
for i from 0 to 9 {
print(itostring(i)+"\n");
};
This code example will loop from 0 to 9 (10 iterations) and display the value of ''i'' on each pass.
''Since: Version 0.41b''
[[Category:sequential]]
56e5308ede19b0b19529cae8d67c806eaf8674fc
Sequential Composition
0
34
179
178
2013-01-13T18:11:10Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
body ; body
== Semantics ==
Will execute the code before the sequential composition, '';'', and then (if this terminates) will execute the code after the sequential composition.<br><br>
''Note:'' Unlike many imperative languages, all blocks must be terminated by a form of composition (sequential or parallel.)
== Examples ==
var a:=12 ; a:=99
In the above example variable ''a'' is declared to be equal to 12, after this the variable is then modified to hold the value of 99.
function1() ; function2()
In the second example ''function1'' will execute and then after (if it terminates) the function ''function2'' will be called.
''Since: Version 0.41b''
[[category:sequential]]
31b6ddb970e8cdbe5b98a513008c7a9f074db8c3
Skip
0
42
239
238
2013-01-13T18:11:26Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
skip
== Semantics ==
Does nothing!
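== Example ==
A minimal sketch (the variables ''a'' and ''b'' are assumed to have been declared elsewhere):
if (a==b) { skip; };
When the condition holds nothing is done; ''skip'' simply satisfies the requirement for a statement in the then body.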
''Since: Version 0.41b''
[[Category:Sequential]]
a6518135018132abcab4e83ca85db2a4e376eb27
Throw
0
31
166
165
2013-01-13T18:11:44Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
throw errorstring;
== Semantics ==
Will throw the error string, and either cause termination of the program or, if caught by a try catch block, will be dealt with.
== Example ==
#include <io>
try {
throw "an error"
} catch "an error" {
print("Error occurred!\n");
};
In this example, a programmer defined error ''an error'' is thrown and caught.
''Since: Version 0.41b''
[[Category:sequential]]
0db367c9efa80f5b19a3c558f7f034606560f0f5
167
166
2013-01-13T18:11:56Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
throw errorstring;
== Semantics ==
Will throw the error string, and either cause termination of the program or, if caught by a try catch block, will be dealt with.
== Example ==
#include <io>
try {
throw "an error"
} catch "an error" {
print("Error occurred!\n");
};
In this example, a programmer defined error ''an error'' is thrown and caught.
''Since: Version 0.5''
[[Category:sequential]]
cd246b45bdc8de69b78bd969d1997457acd230f7
Try
0
30
161
160
2013-01-13T18:12:12Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
try<br>
{<br>
try body<br>
} catch (error string) { <br>
error handing code<br>
}<br>
== Semantics ==
Will execute the code in the try body and handle any errors. This is very important in parallel computing as it allows the programmer to easily deal with any communication errors that may occur. Exception handling is dynamic in Mesham: the most recent appropriate catch block will be entered, depending on program flow.
== Error Strings ==
There are a number of error strings built into Mesham; additional ones can be specified by the programmer.
*Array Bounds - Accessing an array outside its bounds
*Divide by zero - Divide by zero error
*Memory Out - Memory allocation failure
*root - Illegal root process in communication
*rank - Illegal rank in communication
*buffer - Illegal buffer in communication
*count - Count wrong in communication
*type - Communication type error
*comm - Communication communicator error
*truncate - Truncation error in communication
*Group - Illegal group in communication
*op - Illegal operation for communication
*arg - Arguments used for communication incorrect
*oscli - Error returned by operating system when performing a system call
== Example ==
#include <io>
#include <string>
try {
var a:array[Int,10];
print(itostring(a[12]));
} catch ("Array Bounds") {
print("No Such Index\n");
};
In this example the programmer is trying to access element 12 of array ''a''. As this does not exist (the array has only 10 elements), instead of that element being displayed an error message is put on the screen.
''Since: Version 0.5''
[[Category:sequential]]
f0e7036b2e9d666b885600ba28593eeecd6b88ca
While
0
28
151
150
2013-01-13T18:12:27Z
Polas
1
wikitext
text/x-wiki
==Syntax==
while (condition) whilebody;
==Semantics==
Will loop whilst the condition holds.
== Examples ==
var a:=10;
while (a > 0) {
a--;
};
Will loop, each time decreasing the value of variable ''a'' by 1, until the value reaches 0.
''Since: Version 0.41b''
[[Category:Sequential]]
c318fb0d4e8257b34bee54bc04b8c8dffb0f3844
Type Variables
0
101
565
564
2013-01-13T18:12:52Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
typevar name::=type;
name::=type;
Note how ''::='' is used rather than '':=''
''typevar'' is the type equivalent of ''var''
== Semantics ==
Type variables allow the programmer to assign types and type combinations to variables for use as normal program variables. These exist only statically (in compilation) and are not present in the runtime semantics.
== Example ==
typevar m::=Int :: allocated[multiple[]];
var f:m;
typevar q::=declaredtype f;
q::=m;
In the above code example, the type variable ''m'' has the type value ''Int :: allocated[multiple[]]'' assigned to it. On line 2, a new (program) variable is created using this new type variable. In line 3, the type variable ''q'' is declared and has the value of the declared type of program variable ''f''. Lastly in line 4, type variable ''q'' changes its value to become that of type variable ''m''. Although type variables can be thought of as the programmer creating new types, they can also be used like program variables in cases such as equality tests and assignment.
''Since: Version 0.5''
[[Category:Types]]
d4a3fdb583916c9eecfd9cca78ae1feab0e37297
Functions
0
38
207
206
2013-01-13T18:13:44Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Syntax ==
function returntype name(arguments)
== Semantics ==
The type of the variable depends on the pass semantics (by reference or by value). Broadly, all [[:Category:Element Types|element types]] by themselves are pass by value and [[:Category:Compound Types|compound types]] are pass by reference, although this behaviour can be overridden by additional type information. Memory allocated on the heap is pass by reference; static or stack frame memory is pass by value.
== Example ==
function Int add(var a:Int,var b:Int) {
return a + b;
};
This function takes two integers and will return their sum.
function void modify(var a:Int::heap) {
a:=88;
};
In this code example, the ''modify'' function will accept an integer variable but this is allocated on the heap (pass by reference.) The assignment will modify the value of the variable being passed in and will still be accessible once the function has terminated.
== The main function ==
The main function returns void and takes either zero or two arguments. If present, the first argument is the number of command line parameters passed in and the second is a String array containing them; location 0 of the string array is the program name. The main function is the program entry point. It is fine for it to be absent from a Mesham code, in which case the code is assumed to be a library and only accessed via linkage.
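The following is an illustrative sketch only: the parameter names, and the use of a dimensionless ''array[String]'' for the second argument, are assumptions rather than confirmed syntax.
#include <io>
function void main(var argc:Int, var argv:array[String]) {
print(argv[0]+"\n");
};
Here the program would display its own name, held at location 0 of the string array.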
''Since: Version 0.41b''
[[Category:Core Mesham]]
fc9783343bc5546678a752b604bf3cb51a1320c1
Allocated
0
62
333
332
2013-01-13T18:14:17Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allocated[type]
Where ''type'' is optional
== Semantics ==
This type sets the memory allocation of a variable; the allocation may not be modified once set.
== Example ==
var i: Int :: allocated[];
In this example the variable ''i'' is an integer. Although the ''allocated'' type is provided, no additional information is given and as such Mesham allocates it to each processor.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
aeeae4b28a0634483674aa186967c54d01ef2ade
Allreduce
0
82
451
450
2013-01-13T18:14:32Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allreduce[operation]
== Semantics ==
Similar to the [[reduce]] type, but the reduction will be performed on each process and the result is also available to all.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::allreduce["min"]):=p;
};
In this case all processes will perform the reduction on ''p'' and all processes will have the minimum value of ''p'' placed into their copy of ''x''.
== Supported operations ==
{{ Template:ReductionOperations }}
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
fbf10842d22a9bed910906113386c3a51f04908f
452
451
2013-01-13T18:15:03Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allreduce[operation]
== Semantics ==
Similar to the [[reduce]] type, but the reduction will be performed on each process and the result is also available to all.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::allreduce["min"]):=p;
};
In this case all processes will perform the reduction on ''p'' and all processes will have the minimum value of ''p'' placed into their copy of ''x''.
''Since: Version 0.41b''
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
149f222ef370aab9ea1996d2fe45d69a2e3f0e4e
Alltoall
0
81
445
444
2013-01-13T18:15:45Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
alltoall[elementsoneach]
== Semantics ==
Will cause each process to send some elements (the number being equal to ''elementsoneach'') to every other process in the group.
== Example ==
var x:array[Int,12]::allocated[multiple[]];
var r:array[Int,3]::allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::alltoall[3]):=r;
};
In this example each process sends every other process three elements (the elements in its ''r''). Therefore each process ends up with twelve elements in ''x'', the location of each being based on the source process's PID.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
22d465522d80ba41d34af865f40604cb4f7b3d59
Array
0
71
387
386
2013-01-13T18:15:56Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element or record type, followed by the dimensions. The programmer can provide any number of dimensions to create an n dimension array. Default is row major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer uses the traditional ''name[index]'' syntax.<br><br>
''Note:'' If the dimensions are omitted then it is assumed to be a one dimensional array of infinite size without any explicit memory allocation (i.e. data provided into a function.) Be aware, without any size information then it is not possible to bounds check indexes.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[heap]]
* [[onesided]]
== Communication ==
When an array variable is assigned to another then, depending on where each variable is allocated, there may be communication to achieve this assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with the element types, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| Communication to process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
#include <io>
#include <string>
var a:array[String,2];
a[0]:="Hello";
a[1]:="World";
print(a[0]+" "+a[1]+"\n");
This example will declare variable ''a'' to be an array of 2 Strings. Then the first location in the array will be set to ''Hello'' and the second location set to ''World''. Lastly the code will display on stdio both these array string locations, followed by a newline.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
32c20befdc707600668b77fc5cdae9ed5ad78cd1
Async
0
83
457
456
2013-01-13T18:16:18Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
async[ ]
== Semantics ==
This type specifies that the communication should be carried out asynchronously. Asynchronous communication is often very useful and, if used correctly, can increase the efficiency of some applications (although care must be taken). There are a number of different ways in which the results of asynchronous communication can be accepted: when the asynchronous operation is honoured the data is placed into the variable, however exactly when the operation will be honoured is non-deterministic and care must be taken if using dirty values.
The [[sync]] keyword allows the programmer to synchronise either all asynchronous communication or that of a specific variable. The programmer must ensure that all asynchronous communications have been honoured before the process exits, otherwise the behaviour is undefined.
== Examples ==
var a:Int::allocated[multiple[]] :: channel[0,1] :: async[];
var p;
par p from 0 to 2
{
a:=89;
var q:=20;
q:=a;
sync q;
};
In this example, ''a'' is declared to be an integer, allocated to all processes, and to act as an asynchronous channel between processes 0 and 1. In the par loop, the assignment ''a:=89'' is applicable on process 0 only, resulting in an asynchronous send. Each process executes the assignment and declaration ''var q:=20'' but only process 1 will execute the last assignment ''q:=a'', resulting in an asynchronous receive. Each process then synchronises all the communications relating to variable ''q''.
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: async[];
var c:Int::allocated[single[on[3]]] :: async[];
a:=b;
c:=a;
b:=c;
sync;
This example demonstrates the use of the ''async'' type in terms of default shared variable style communication. In the assignment ''a:=b'', processor 2 will issue an asynchronous send and processor 1 will issue a synchronous (standard) receive. The second assignment, ''c:=a'', processor 3 will issue an asynchronous receive and processor 1 a synchronous send. In the last assignment, ''b:=c'', both processors (3 and 2) will issue asynchronous communication calls (send and receive respectively.) The last line of the program will force each process to wait and complete all asynchronous communications.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
83297e132ac74e9237061790805676e6095dbdd5
Blocking
0
84
463
462
2013-01-13T18:16:33Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
blocking[ ]
== Semantics ==
Will force P2P communication to be blocking, which is the default setting.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: blocking[];
a:=b;
The P2P communication (send on process 2 and receive on process 1) resulting from the assignment ''a:=b'' will force program flow to wait until it has completed. The ''blocking'' type has been omitted from the type of variable ''a'', but is applied by default.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
32291074716fa4ebe9ce92e66cb482ab3cd3b90c
Bool
0
49
277
276
2013-01-13T18:16:52Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Bool
== Semantics ==
A true or false value
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Bool;
var x:=true;
In this example variable ''i'' is explicitly declared to be of type ''Bool''. Variable ''x'' is declared with the value ''true'', which via type inference results in its type also becoming ''Bool''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
15e1a46b96a28a0a580824125a8c4a30bc8e2ee6
Broadcast
0
78
430
429
2013-01-13T18:17:06Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
broadcast[root]
== Semantics ==
This type will broadcast a variable amongst the processes, with the root (source) being PID=root. The variable concerned must be allocated either to all processes or to a group of processes (in the latter case communication will be limited to that group).
== Example ==
var a:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
(a::broadcast[2]):=23;
};
In this example process 2 (the root) will broadcast the value 23 amongst the processes, each process receiving this value and placing it into their copy of ''a''.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
3e178c56de20cbbdb1cd316ea702c6a883597980
Buffered
0
87
482
481
2013-01-13T18:17:16Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
buffered[buffersize]
== Semantics ==
This type will ensure that P2P Send will reach the finish state (i.e. complete) when the message is copied into a buffer of size ''buffersize'' bytes. At some later point the message will be sent to the target process. If ''buffersize'' is not provided then a default is used. This type associates with the [[sync]] keyword which will wait until the message has been copied out of the buffer.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: buffered[500];
var c:Int::allocated[single[on[2]]] :: buffered[500] :: nonblocking[];
a:=b;
a:=c;
For the P2P communication resulting from the assignment ''a:=b'', process 2 will issue a (blocking) buffered send (buffer size 500 bytes), which will complete once the message has been copied into this buffer. For the assignment ''a:=c'', process 2 will issue another buffered send, this time nonblocking, where program flow will continue between the start and finish states of communication. The finish state will be reached once the value of variable ''c'' has been copied into a buffer held on process 2.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
70d699448a44c940ddb33576b08d10e72c7fd2ed
Channel
0
74
406
405
2013-01-13T18:17:29Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
channel[a,b]
Where ''a'' and ''b'' are both distinct processes which the channel will connect.
== Semantics ==
The ''channel'' type will specify that a variable is a channel from process ''a'' (the sender) to process ''b'' (the receiver). Normally this will result in synchronous communication, although if the ''async'' type is used then asynchronous communication is selected instead. Note that a channel is unidirectional: process ''a'' sends and ''b'' receives, NOT the other way around.<br><br>
''Note:'' By default (no further type information) all channel communication is blocking using standard send.<br>
''Note:'' If no allocation information is specified with the channel type then the underlying variable will not be assigned any memory - it is instead an abstract connection in this case.
== Example ==
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2 {
(x::channel[0,2]):=193;
var hello:=(x::channel[0,2]);
};
In this case, ''x'' is a channel between processes 0 and 2. In the par loop process 0 sends the value 193 to process 2. Then the variable ''hello'' is declared and process 2 will receive this value.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
9be50ee7085adf909c3a7ab6b96199c953aaffff
Char
0
50
283
282
2013-01-13T18:17:44Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Char
== Semantics ==
An 8 bit ASCII character
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Char;
var r:='a';
In this example variable ''i'' is explicitly declared to be of type ''Char''. Variable ''r'' is declared and found, via type inference, to also be type ''Char''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
dd4b445b57d931ca1dd48bea2561d731ea86a4b5
Col
0
73
400
399
2013-01-13T18:18:02Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
col[ ]
== Semantics ==
In combination with the array type, the programmer can specify whether allocation is row or column major. This information is provided in the allocation type. In column major allocation the first dimension is the least major and the last dimension the most major.
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
Here the array has column major allocation, but the programmer has overridden this (just for the assignment) on line 3. If an array of one allocation is copied to an array of a different allocation then transposition will be performed automatically in order to preserve indexes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
5122f17f4523b323cde47013319d3c4537589696
Commgroup
0
64
345
344
2013-01-13T18:18:16Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
commgroup[process list]
== Semantics ==
Specified within the multiple type, this will limit memory allocation (and variable communication) to the processes within the list given in this type's arguments. This type will ensure that the communication group's processes exist.
== Example ==
var i:Int :: allocated[multiple[commgroup[1,3]]];
In this example there are a number of processes, but only 1 and 3 have variable ''i'' allocated to them. This type will also have ensured that processes two (and zero) exist for there to be a process three.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
9e7331885d77eb6045981210a772092e622b6443
Const
0
66
356
355
2013-01-13T18:18:27Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
const[ ]
== Semantics ==
Enforces the read only property of a variable.
== Example ==
var a:Int;
a:=34;
a:(a :: const[]);
a:=33;
The code in the above example will produce an error. Whilst the first assignment (''a:=34'') is legal, on the subsequent line the programmer has modified the type of ''a'' to be that of ''a'' combined with the type ''const''. The second assignment attempts to modify a now read-only variable and will fail.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
0691ccfabe0808ab0ed005dba1854158caf45f04
Directref
0
70
376
375
2013-01-13T18:18:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
directref[ ]
== Semantics ==
This tells the compiler that the programmer might use this variable outside of the language (e.g. via embedded C code) and not to perform certain optimisations which might prevent this.
== Example ==
var pid:Int :: allocated[multiple[]] :: directref[];
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
87970a95e94e77d808aad62afb188c0d068d5fcc
Double
0
48
272
271
2013-01-13T18:19:16Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Double
== Semantics ==
A double precision 64 bit floating point number. This is the type given to constant floating point numbers that appear in program code.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Double;
In this example variable ''i'' is explicitly declared to be of type ''Double''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
6ce108184744c4540de85add20e523e0efce9f0f
Evendist
0
95
526
525
2013-01-13T18:19:34Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
evendist[]
== Semantics ==
Will distribute data blocks evenly amongst the processes. If there are more blocks than processes then the blocks will wrap around; if there are too few blocks then not all processes will receive a block. The figure below illustrates even distribution of 10 blocks of data over 4 processes.
<center>[[Image:evendist.jpg|Even distribution of 10 blocks of data over 4 processors using type oriented programming]]</center>
== Example ==
var a:array[Int,16,16] :: allocated[row[] :: horizontal[4] :: single[evendist[]]];
var b:array[Int,16,16] :: allocated[row[] :: vertical[4] :: single[evendist[]]];
var e:array[Int,16,16] :: allocated[row[] :: single[on[1]]];
var p;
par p from 0 to 3
{
var q:=b[p][2][3];
var r:=a[p][2][3];
var s:=b :: horizontal[][p][2][3];
};
a:=e;
In this example (which involves 4 processors) there are three [[array|arrays]] declared, ''a'', ''b'' and ''e''. Array ''a'' is [[horizontal|horizontally]] partitioned into 4 blocks, evenly distributed amongst the processors, whilst ''b'' is [[vertical|vertically]] partitioned into 4 blocks and also evenly distributed amongst the processors. Array ''e'' is located on processor 1 only. All arrays are allocated [[row]] major. In the [[par]] loop, variables ''q'', ''r'' and ''s'' are declared and assigned to be values at specific points in a processor's block. Because ''b'' is partitioned [[vertical|vertically]] and ''a'' [[horizontal|horizontally]], variable ''q'' is the value at ''b's'' block memory location 11, whilst ''r'' is the value at ''a's'' block memory location 35. On line 9, variable ''s'' is the value at ''b's'' block memory location 50 because, just for this expression, the programmer has used the [[horizontal]] type to take a horizontal view of the distributed array. It should be noted that in line 9, it is just the view of data that is changed, the underlying data allocation is not modified.
In line 11 the assignment ''a:=e'' results in a scatter as per the definition of its declared type.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Distribution Types]]
799e2bf7da28c7fb0a0ce2646905ae8b9d14b5bf
Extern
0
69
371
370
2013-01-13T18:19:53Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
extern[]
== Semantics ==
Provided as additional allocation type information, this tells the compiler NOT to allocate memory for the variable as this has been already done externally.
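== Example ==
An illustrative sketch only: the variable name is hypothetical, and the exact position of ''extern'' within the type chain is an assumption based on it being additional allocation information.
var i:Int :: allocated[multiple[] :: extern[]];
Here the compiler would allocate no memory for ''i''; the memory is assumed to have been provided externally, for example by embedded C code.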
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
6756c1cd74419a93ab7119eaed8b0055ef7258ff
File
0
52
295
294
2013-01-13T18:20:08Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
File
== Semantics ==
A file handle which the programmer can use to reference open files on the file system.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:File;
In this example variable ''i'' is explicitly declared to be of type ''File''.
''Since: Version 0.41b''
== Communication ==
It is not currently possible to communicate file handles due to operating system constraints.
[[Category:Element Types]]
[[Category:Type Library]]
265ee82bea801315c5d54e62f3e0b37e7b1b4c69
Float
0
47
266
265
2013-01-13T18:20:23Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Float
== Semantics ==
A 32 bit floating point number
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Float;
In this example variable ''i'' is explicitly declared to be of type ''Float''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
b1e2ea345439d25f6c91f8af551d7f005d9a283a
Gather
0
79
435
434
2013-01-13T18:20:40Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
gather[elements,root]
== Semantics ==
Gather a number of elements (equal to ''elements'') from each process and send these to the root process.
== Example ==
var x:array[Int,12] :: allocated[single[on[2]]];
var r:array[Int,3] :: allocated[multiple[]];
var p;
par p from 0 to 3
{
(x::gather[3,2]):=r;
};
In this example, the variable ''x'' is allocated on the root process (2) only, whereas ''r'' is allocated on all processes. In the assignment all three elements of ''r'' are gathered from each process, sent to the root process (2) and placed into variable ''x'' in the order defined by the source's PID.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
2a6b3db9f95c8058139539f857b38d799b26dbc5
Heap
0
185
1023
1022
2013-01-13T18:20:59Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
heap[]
== Semantics ==
Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br>
''Note:'' All heap memory is garbage collected. The specifics depend on the runtime library; broadly, once memory goes out of scope it will be collected at some future point. Although not necessary, you can assign the ''null'' value to the variable, which will drop a reference to the memory.
''Note:'' When used on function parameters or the return type, this type instructs pass by reference
== Example ==
var i:Int :: allocated[heap];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the heap. Note how we have omitted the optional brackets from the ''heap'' type as there are no arguments.
''Since: Version 0.1''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
ef065de021dc7e00cedd1cea354a9ca04a1a27eb
1024
1023
2013-01-13T18:21:09Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
heap[]
== Semantics ==
Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br>
''Note:'' All heap memory is garbage collected. The specifics depend on the runtime library; broadly, once memory goes out of scope it will be collected at some future point. Although not necessary, you can assign the ''null'' value to the variable, which will drop a reference to the memory.
''Note:'' When used on function parameters or the return type, this type instructs pass by reference
== Example ==
var i:Int :: allocated[heap];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the heap. Note how we have omitted the optional brackets from the ''heap'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
bf0a685210511c771054c270fbaf7caa153abeff
Horizontal
0
90
505
504
2013-01-13T18:21:54Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks so as to keep them a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As in the last row of the table, if the two partitions are the same type then a simple copy is performed. However, if they are different then an error will be generated, as Mesham disallows differently typed partitions being assigned to each other.
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of the block.
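== Example ==
A sketch of the gather rule from the communication table above; the exact composition of the partition with the allocation types follows the distributed-array pattern described in [[single]], and the sizes here are illustrative:
var a:array[Int,12] :: allocated[single[horizontal[4]]];
var b:array[Int,12] :: allocated[single[on[0]]];
var p;
par p from 0 to 3
{
b:=a;
};
In this sketch ''a'' is horizontally partitioned into 4 blocks amongst the processes, whilst ''b'' is held on process 0 only, so the assignment ''b:=a'' results in a gather as per the communication table above.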
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
4f8af2ff9d759630b6075cbb3b5fdd2017f2065c
Int
0
45
254
253
2013-01-13T18:22:12Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Int
== Semantics ==
A single whole, 32 bit, number. This is also the type of integer constants.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Int;
var b:=12;
In this example variable ''i'' is explicitly declared to be of type ''Int''. On line 2, variable ''b'' is declared and via type inference will also be of type ''Int''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
946fa827a5c8c43618bdf7c0c313ff23b5c62f9e
Long
0
53
300
299
2013-01-13T18:22:25Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Long
== Semantics ==
A long 64 bit number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Long;
In this example variable ''i'' is explicitly declared to be of type ''Long''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
4628fdd88387d51a8ed739fdb72e5dd4d3f75bb2
Multiple
0
63
338
337
2013-01-13T18:22:38Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
multiple[type]
Where ''type'' is optional
== Semantics ==
Included within ''allocated'', this type (with no arguments) will set the specific variable to have memory allocated on all processes within the current scope.
== Example ==
var i: Int :: allocated[multiple[]];
In this example the variable ''i'' is an integer, allocated to all processes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
caf11a957776bcc58e432affbd6ac8036fdba73f
Nonblocking
0
85
469
468
2013-01-13T18:22:47Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
nonblocking[ ]
== Semantics ==
This type will force P2P communication to be nonblocking. In this mode communication (send or receive) can be thought of as having two distinct states: start and finish. The nonblocking type will start communication and allow program execution to continue between these two states, whilst blocking (standard) mode requires that the finish state has been reached before continuing. The [[sync]] keyword can be used to force the program to wait until the finish state has been reached.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[];
var b:Int::allocated[single[on[2]]];
a:=b;
sync a;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking receive whilst process 2 will issue a blocking send. All nonblocking communication with respect to variable ''a'' is completed by the keyword ''sync a''.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
34e4e11c0befde956bd5701e79c2e08e85c49f5b
Onesided
0
76
416
415
2013-01-13T18:22:58Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
onesided[a,b]
onesided[]
== Semantics ==
Identical to the [[Channel]] type, but will perform onesided communication rather than P2P. This form of communication is less efficient than P2P, but there are no issues such as deadlock to consider. This type is connected to the [[sync]] keyword, which allows the programmer to barrier synchronise to ensure up-to-date values. The current memory model is Concurrent Read Concurrent Write (CRCW).<br><br>
''Note:'' This is the default communication behaviour in the absence of further type information.
== Example ==
var i:Int::onesided::allocated[single[on[2]]];
proc 0 {i:=34;};
sync i;
In the above code example variable ''i'' is declared to be an Integer using onesided communication on process two only. In line two an assignment occurs on process zero which will write the value from process zero into the memory held by process two. At line three barrier synchronisation will occur on variable ''i'', which in this case will involve processes zero and two, ensuring that the value has been written fully and is available.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
6a121aa437c550d3767105c331647099afccb1f1
417
416
2013-01-13T18:23:19Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
onesided[a,b]
onesided[]
== Semantics ==
Identical to the [[Channel]] type, but will perform onesided communication rather than P2P. This form of communication is less efficient than P2P, but there are no issues such as deadlock to consider. This type is connected to the [[sync]] keyword, which allows the programmer to barrier synchronise to ensure up-to-date values. The current memory model is Concurrent Read Concurrent Write (CRCW).<br><br>
''Note:'' This is the default communication behaviour in the absence of further type information.
== Example ==
var i:Int::onesided::allocated[single[on[2]]];
proc 0 {i:=34;};
sync i;
In the above code example variable ''i'' is declared to be an Integer using onesided communication on process two only. In line two an assignment occurs on process zero which will write the value from process zero into the memory held by process two. At line three barrier synchronisation will occur on variable ''i'', which in this case will involve processes zero and two, ensuring that the value has been written fully and is available.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
919c6c7087ebcb388eecedc0e57177d2060926b9
Pipe
0
75
411
410
2013-01-13T18:23:33Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
pipe[a,b]
== Semantics ==
Identical to the [[Channel]] type, except pipe is bidirectional rather than unidirectional.
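== Example ==
A sketch, assuming that ''a'' and ''b'' in ''pipe[a,b]'' name the two communicating processes as with [[Channel]]:
var x:Int :: allocated[single[on[1]]] :: pipe[1,2];
var y:Int :: allocated[single[on[2]]] :: pipe[1,2];
x:=y;
y:=x;
Because the pipe is bidirectional, values may be communicated in either direction between processes 1 and 2 over the same pipe.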
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
f691875ec9792acb345b209a1b3a8266ef975af4
Ready
0
88
489
488
2013-01-13T18:23:50Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
ready[ ]
== Semantics ==
The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunction with the [[nonblocking]] type, communication start will wait until a matching receive is posted. This type acts as a form of handshaking and can improve performance in some uses.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: ready[];
var c:Int::allocated[single[on[2]]] :: ready[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' will only begin once the receive from process 1 has been issued. With the statement ''a:=c'' the send, even though it is [[nonblocking]], will only start once a matching receive has been issued too.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
98a403b370884a5bdb654b45953bf9183268ade3
Record
0
96
534
533
2013-01-13T18:24:07Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef structure in C. To access a member of a record use the dot operator ''.''
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[stack]]
* [[onesided]]
== Example ==
typevar complex ::= record["r",Float,"i",Float];
var a:array[complex, 10];
var number:complex;
var pixel : record["r",Int,"g",Int,"b",Int];
a[1].r:=8.6;
number.i:=3.22;
pixel.b:=128;
In the above example, ''complex'' is declared as a [[Type_Variables|type variable]] representing a complex number. This is then used as the type chain for ''a'', which is an [[array]], and for ''number''. Using records via a type variable can be useful, although a record may also be included directly in a variable's type chain, as in the declaration of ''pixel''. Do not confuse ''complex'' (a type variable existing during compilation only) with ''pixel'' (a normal data variable which exists at runtime). In the last three lines assignment occurs to the declared variables.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
d18df1efac6f7771fef5a801b7dccc88fe69beb0
Reduce
0
77
424
423
2013-01-13T18:24:25Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
reduce[root,operation]
== Semantics ==
All processes in the group will combine their values together at the root process and then the operation will be performed on them.
== Example ==
var t:Int::allocated[multiple[]];
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::reduce[1,"max"]);
x:=p;
t:=x;
};
In this example, ''x'' is to be reduced, with the root as process 1 and the operation will be to find the maximum number. In the first assignment ''x:=p'' all processes will combine their values of ''p'' and the maximum will be placed into process 1's ''x''. In the second assignment ''t:=x'' processes will combine their values of ''x'' and the maximum will be placed into process 1's ''t''.
''Since: Version 0.41b''
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
35118480ee7c273e499d1bfabc519296823c1b3d
Referencerecord
0
97
541
540
2013-01-13T18:24:44Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The [[record]] type may NOT refer to itself (or other records), whereas reference records support this, allowing the programmer to create data structures such as linked lists and trees. There are some added complexities to reference records, such as communicating them (all links and linked nodes will be communicated with the record) and freeing the data (garbage collection). This results in a slight performance hit and is the reason why the record concept has been split into two types.
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[heap]]
''Currently communication is not available for reference records; this will be fixed at some point in the future.''
== Example ==
#include <io>
#include <string>
typevar node;
node::=referencerecord["prev",node,"data",Int,"next",node];
var head:node;
head:=null;
var i;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=head;
if (head!=null) head.prev:=newnode;
head:=newnode;
};
while (head != null) {
print(itostring(head.data)+"\n");
head:=head.next;
};
In this code example a doubly linked list is created, and then its contents read node by node.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
72927ca373e5a76b2e94d7b282ad0208120ba029
Row
0
72
394
393
2013-01-13T18:25:00Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
row[ ]
== Semantics ==
In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In row major allocation the first dimension is the most major and the last the most minor.
== Example ==
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
Here the array has column major allocation, but the programmer has overridden this (just for the assignment) on line 3. If an array of one allocation is copied to an array with a different allocation then transposition will be performed automatically in order to preserve indexes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
f6fb926956f23477f1754e7159dc8dad8d260b0b
Scatter
0
80
440
439
2013-01-13T18:25:17Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
scatter[elements,root]
== Semantics ==
Will send a number of elements (equal to ''elements'') from the root process to all other processes.
== Example ==
var x:array[Int,3]::allocated[multiple[]];
var r:array[Int,12]::allocated[multiple[]];
var p;
par p from 0 to 3
{
x:(x::scatter[3,1]);
x:=r;
};
In this example, three elements of array ''r'' on process 1 are scattered to each process and placed into its copy of ''x''.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
65fd6148d8d8f1fae5c270ea2592b1bf3da05aaf
Share
0
68
365
364
2013-01-13T18:25:34Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
share[name]
== Semantics ==
This type allows the programmer to have two variables sharing the same memory (the variable that the share type is applied to uses the memory of the variable specified as an argument to the type). This is very useful in HPC applications, as processes often run at the limit of their resources. The type will share memory with that of the variable ''name'' in the above syntax. To keep this type safe, the sharee must be smaller than or of equal size to the memory chunk; this is error checked.
== Example ==
var a:Int::allocated[multiple[]];
var c:Int::allocated[multiple[] :: share[a]];
var e:array[Int,10]::allocated[single[on[1]]];
var u:array[Char,12]::allocated[single[on[1]] :: share[e]];
In the example above, the variables ''a'' and ''c'' will share the same memory, as will the variables ''e'' and ''u''. At first glance the second sharing might look erroneous, as array ''u'' has 12 elements whilst array ''e'' has only 10. However, because the two arrays have different element types the sizes are checked dynamically: an Int is 32 bits and a Char only 8, so the 12 bytes of ''u'' fit within the 40 bytes of ''e'' and this sharing works.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
44e98b4ee7a0c9f4a5d262491a67470508d7a5dc
Short
0
182
1008
1007
2013-01-13T18:25:50Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Short
== Semantics ==
A single whole, 16 bit, number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:Short;
In this example variable ''i'' is explicitly declared to be of type ''Short''.
''Since: Version 1.0''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
a83da899d72f4326891dc4ca89fd49f1c67fef0c
Single
0
65
351
350
2013-01-13T18:26:11Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
single[type]
single[on[process]]
where ''type'' is optional
== Semantics ==
Will allocate a variable to a specific process. Most commonly combined with the ''on'' type, which specifies the process to allocate to, although this is not required if it can be inferred. Additionally the programmer may place a distribution type within ''single'' when dealing with distributed arrays.
== Example ==
var i:Int :: allocated[single[on[1]]];
In this example variable ''i'' is declared as an integer and allocated on process 1.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
1082b31fd183a3c6a78e041af12e33fd7ad223af
Stack
0
184
1017
1016
2013-01-13T18:26:27Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
stack[]
== Semantics ==
Instructs the environment to bind the associated variable to stack frame memory, which exists for a specific function only whilst that function is ''alive''. Once the corresponding function has returned, the memory is freed and hence the variable ceases to exist.<br><br>
''Note:'' When used on function parameters or the return type, this type instructs pass by value
== Example ==
var i:Int :: allocated[stack];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the stack frame of the current function. Note how we have omitted the optional brackets from the ''stack'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
acdfa4ed22603bd2e5e7060a5e785412d635e91e
Standard
0
86
475
474
2013-01-13T18:26:39Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
standard[ ]
== Semantics ==
This type will force P2P sends to follow the standard form of reaching the finish state either when the message has been delivered or it has been copied into a buffer on the sender. This is the default applied if further type information is not present.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[] :: standard[];
var b:Int::allocated[single[on[2]]] :: standard[];
a:=b;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking standard receive whilst process 2 will issue a blocking standard send.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
1b71110cc72efa007c580ba92a65c801da7549ec
Static
0
186
1030
1029
2013-01-13T18:26:55Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
static[]
== Semantics ==
Instructs the environment to bind the associated variable to static memory. Because it is allocated in static memory, this is the same physical memory for every function call and loop iteration (environment binding only occurs once).<br><br>
''Note:'' When used on function parameters or the return type, this type instructs pass by value
== Example ==
var i:Int :: allocated[static];
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) in static memory. Note how we have omitted the optional brackets from the ''static'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
ef49a32e369b0fe9898bd0b5135fc4ef43d6be00
String
0
51
289
288
2013-01-13T18:27:10Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
String
== Semantics ==
A string of characters. All strings are immutable; concatenating strings will in fact create a new string.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
var i:String;
var p:="Hello World!";
In this example variable ''i'' is explicitly declared to be of type ''String''. Variable ''p'' is found, via type inference, also to be of type ''String''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
678cb8cbb72dba7287ecd37820aaea661db79d12
Synchronous
0
89
495
494
2013-01-13T18:27:24Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
synchronous[]
== Semantics ==
By using this type, the send of P2P communication will only reach the finish state once the message has been received by the target processor.
== Examples ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: synchronous[] :: blocking[];
var c:Int::allocated[single[on[2]]] :: synchronous[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' (and program execution on process 2) will only complete once process 1 has received the value of ''b''. The send involved with the second assignment is synchronous [[nonblocking]]: program execution can continue between the start and finish states, with the finish state only reached once process 1 has received the message (the value of ''c''). Incidentally, as already mentioned, the [[blocking]] type of variable ''b'' would have been chosen by default if omitted (as in previous examples).
var a:Int :: allocated[single[on[0]]];
var b:Int :: allocated[single[on[1]]];
a:=b;
a:=(b :: synchronous[]);
The code example above demonstrates the programmer's ability to change the communication send mode just for a specific assignment. In the first assignment, process 1 issues a [[blocking]] [[standard]] send, however in the second assignment the communication mode type ''synchronous'' is coerced with the type of ''b'' to provide a [[blocking]] synchronous send just for this assignment only.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
8583edac1780a0ffa01304ad2e71619d964a4a02
Tempmem
0
67
361
360
2013-01-13T18:27:35Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
tempmem[ ]
== Semantics ==
Used to inform the compiler that the programmer is happy for a call (usually communication) to use temporary memory. Some calls cannot function without this and will give an error; others will work more efficiently with temporary memory but can operate without it at a performance cost. This type is provided because memory is often at a premium, with applications running at their limit. It is therefore useful for the programmer to indicate whether or not using extra, temporary memory is allowed.
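== Example ==
A minimal sketch (whether a particular operation actually requires temporary memory is implementation dependent):
var x:array[Int,12] :: allocated[multiple[]] :: tempmem[];
Here the programmer has indicated that operations involving ''x'' (usually communication) are permitted to use extra, temporary memory.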
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
47a73f661f93b39324cc395041a14797ffe84a76
Vertical
0
91
514
513
2013-01-13T18:27:44Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
vertical[blocks]
== Semantics ==
Same as the [[horizontal]] type but will partition the array vertically. The figure below illustrates partitioning an array into 4 blocks vertically.
<center>[[Image:vert.jpg|Vertical Partition of an array into four blocks via type oriented programming]]</center>
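== Example ==
A sketch mirroring the [[horizontal]] type; as there, the exact composition of the partition with the allocation types follows [[single]], and the sizes here are illustrative:
var a:array[Int,16] :: allocated[single[vertical[4]]];
Here ''a'' is vertically partitioned into 4 blocks. The communication rules are the same as those of the [[horizontal]] type.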
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
93d4900583880b0dc9749b0b8b0040b1708eb560
Acos
0
192
1055
1054
2013-01-13T18:28:18Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The acos(d) function will find the inverse cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse cosine of
* '''Returns:''' A [[Double]] representing the inverse cosine
== Example ==
#include <maths>
var d:=acos(0.9);
var y:=acos(d);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
f88f647994e98dea9c3a9bd7908e9e594c9cdf21
Asin
0
193
1061
1060
2013-01-13T18:28:38Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The asin(d) function will find the inverse sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse sine of
* '''Returns:''' A [[Double]] representing the inverse sine
== Example ==
#include <maths>
var d:=asin(0.5);
var y:=asin(d);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
c959bf8d8d651f2e043585a817711492659f111d
Atan
0
194
1067
1066
2013-01-13T18:28:57Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The atan(d) function will find the inverse tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse tangent of
* '''Returns:''' A [[Double]] representing the inverse tangent
== Example ==
#include <maths>
var d:=atan(876.3);
var y:=atan(d);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
aad1defe1eef49f40bebb417410aab891a58feed
Ceil
0
198
1087
1086
2013-01-13T18:29:21Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This ceil(d) function will find the smallest integer greater than or equal to ''d''.
* '''Pass:''' A [[Double]] to find the ceil of
* '''Returns:''' An [[Int]] representing the ceiling
== Example ==
#include <maths>
var a:=ceil(10.5);
var y:=ceil(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
9cef7d4b7ad3105288e4c5f99466e025a2479f1e
Charat
0
124
679
678
2013-01-13T18:29:38Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This charat(s,n) function will return the character at position ''n'' of the string ''s''.
* '''Pass:''' A [[String]] and [[Int]]
* '''Returns:''' A [[Char]]
== Example ==
#include <string>
var a:="hello";
var c:=charat(a,2);
var d:=charat("test",0);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
2f978b13d398ef402b3d0b8dd39946a9b3618fbb
Close
0
201
1101
1100
2013-01-13T18:30:01Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The close(f) function will close the file represented by handle ''f''
* '''Pass:''' A [[File]] handle
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:=open("myfile.txt","r");
close(f);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
540a766a72e81227c8f5dfdfa78633416149a23c
Complex
0
200
1096
1095
2013-01-13T18:30:14Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The ''complex'' type variable is defined within the mathematical library to represent a complex number with real and imaginary components. This is built from a [[record]] type with both components as doubles.
== Example ==
#include <maths>
var a:complex;
a.i:=19.65;
a.r:=23.44;
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
cf031593d86f0996042e98b01a7c9b217e10285e
Cos
0
108
592
591
2013-01-13T18:30:37Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This cos(d) function will find the cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find cosine of
* '''Returns:''' A [[Double]] representing the cosine
== Example ==
#include <maths>
var a:=cos(10.4);
var y:=cos(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
7da8c9e58707f127fb28d2fbce025e790ad245f5
Cosh
0
195
1073
1072
2013-01-13T18:31:31Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The cosh(d) function will find the hyperbolic cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic cosine of
* '''Returns:''' A [[Double]] representing the hyperbolic cosine
== Example ==
#include <maths>
var d:=cosh(10.4);
var y:=cosh(d);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
49d1b58ccb3c93abe2cd4a6dbda198b9cc09dba9
Displaytime
0
130
711
710
2013-01-13T18:31:47Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This displaytime() function will display the timing results recorded by the function [[recordtime]] along with the process ID. This is very useful for debugging or performance testing.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
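== Example ==
A sketch, assuming [[recordtime]] is called with no arguments to record a timing point (see the [[recordtime]] page for its exact usage); the loop body is illustrative:
recordtime();
var i;
for i from 0 to 999 {
};
displaytime();
After the loop completes, ''displaytime()'' displays the recorded timing results along with the process ID.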
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:System Functions]]
3f06a11df08b2266964a7ead9ded50acbd9a19d2
Dtostring
0
206
1123
1122
2013-01-13T18:31:57Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The dtostring(d, a) function will convert the variable or value ''d'' into a string using the formatting supplied in ''a''.
* '''Pass:''' A [[Double]] and [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:=23.4352;
var c:=dtostring(a, "%.2f");
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
201d66c7fd987169d2e535fc02d537db078d631c
Exit
0
132
719
718
2013-01-13T18:32:13Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This exit() function will cease program execution and return to the operating system. From an implementation point of view, this will return ''EXIT_SUCCESS'' to the OS.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
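== Example ==
A minimal sketch:
#include <io>
print("All done\n");
exit();
Program execution ceases at the ''exit()'' call, returning ''EXIT_SUCCESS'' to the operating system.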
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:System Functions]]
0fbe682d48df22a1732cf87f79f50ad0c7d81945
Floor
0
109
597
596
2013-01-13T18:32:37Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This floor(d) function will find the largest integer less than or equal to ''d''.
* '''Pass:''' A [[Double]] to find floor of
* '''Returns:''' An [[Int]] representing the floor
== Example ==
#include <maths>
var a:=floor(10.5);
var y:=floor(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
6bf7c9be7ec067315b9ad7b1c508fa13ce9ce66f
Gc
0
208
1130
1129
2013-01-13T18:32:56Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The gc() function will collect any garbage memory. Memory allocated via the [[Heap]] type is subject to garbage collection, which will occur automatically during program execution but can be invoked manually via this function call.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
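== Example ==
An illustrative sketch only; the ''system'' include is an assumption inferred from this function's category:
#include <system>
gc();
This manual call forces a collection cycle at that point; memory allocated via the [[Heap]] type is otherwise collected automatically during execution.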
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:System Functions]]
19028e7244b6e1d98c433c5bd2a9c8f2c2da309a
Getepoch
0
207
1127
1126
2013-01-13T18:33:10Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This getepoch() function will return the number of milliseconds since the epoch (1st January 1970).
* '''Pass:''' Nothing
* '''Returns:''' [[Long]] containing the number of milliseconds
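== Example ==
An illustrative timing sketch; the ''system'' include is an assumption inferred from this function's category:
#include <system>
var start:=getepoch();
var finish:=getepoch();
var elapsed:=finish-start;
Here ''elapsed'' holds the number of milliseconds that passed between the two calls.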
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:System Functions]]
62a04821a2697c24594afdfac428529d7416fc9e
Getprime
0
110
602
601
2013-01-13T18:33:41Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This getprime(n) function will find the ''n''th prime number.
* '''Pass:''' An [[Int]]
* '''Returns:''' An [[Int]] representing the prime
== Example ==
#include <maths>
var a:=getprime(10);
var y:=getprime(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
f1556ccfdd840598e96c88d11c0d0b78272ce98e
Input
0
118
648
647
2013-01-13T18:34:05Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This input(i) function will prompt the user for input via stdin, placing the result into ''i''.
* '''Pass:''' A variable for the input to be written into, of type [[String]]
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:String;
input(f);
print("You wrote: "+f+"\n");
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
4c17ce213e71306a29d736faf60da6e21701c512
Itostring
0
205
1119
1118
2013-01-13T18:34:20Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The itostring(n) function will convert the variable or value ''n'' into a string.
* '''Pass:''' An [[Int]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:=234;
var c:=itostring(a);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
8d8b8ea6411856d77a6008ae28ae12c6b03ccc35
Log
0
111
609
608
2013-01-13T18:34:47Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This log(d) function will find the natural logarithm of ''d''.
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the logarithmic value
== Example ==
#include <maths>
var a:=log(10.54);
var y:=log(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
8d58b5fee7760443139405f5e08d6c91677c9a3b
Log10
0
199
1092
1091
2013-01-13T18:35:11Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This log10(d) function will find the base 10 logarithm of ''d''.
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the base 10 logarithmic value
== Example ==
#include <maths>
var a:=log10(0.154);
var y:=log10(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
23d870f92d47a71b3ccc495b1b5b7a964e78258c
Lowercase
0
125
686
685
2013-01-13T18:35:31Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This lowercase(s) function will return the lower case result of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:="HeLlO";
var c:=lowercase(a);
var d:=lowercase("TeST");
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
8fbd5b8578525d9cc56522b1f2128e136ab8cf45
Mod
0
112
614
613
2013-01-13T18:35:48Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This mod(n,x) function will divide ''n'' by ''x'' and return the remainder.
* '''Pass:''' Two integers
* '''Returns:''' An integer representing the remainder
== Example ==
#include <maths>
var a:=mod(7,2);
var y:=mod(a,a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
8d9632980bcebad1132b0b571983f26be2077592
Open
0
202
1106
1105
2013-01-13T18:36:01Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This open(n,a) function will open the file of name ''n'' with mode of ''a''.
* '''Pass:''' The name of the file to open of type [[String]] and mode of type [[String]]
* '''Returns:''' A file handle of type [[File]]
== Example ==
#include <io>
var f:=open("myfile.txt","r");
close(f);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
4dbb0ffc240e3ef9948e428b81d26f00d0843771
Oscli
0
133
723
722
2013-01-13T18:36:16Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This oscli(a) function will pass the command line interface (e.g. Unix or MS DOS) command to the operating system for execution.
* '''Pass:''' A [[String]] representing the command
* '''Returns:''' Nothing
* '''Throws:''' The error string ''oscli'' if the operating system returns an error to this call
== Example ==
#include <io>
#include <system>
var a:String;
input(a);
try {
oscli(a);
} catch ("oscli") {
print("Error in executing command\n");
};
The above program is a simple interface, allowing the user to input a command which is then passed to the OS for execution. The ''oscli'' call is wrapped in a try-catch block which will detect when the user has requested the running of an erroneous command; this explicit error handling is entirely optional.
''Since: Version 0.5''
[[Category:Function Library]]
[[Category:System Functions]]
d44768a272f9583fb5b1ab8f83e3e0c793be30d4
PI
0
113
620
619
2013-01-13T18:36:36Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pi() function will return PI.
''Note: The number of significant figures of PI is implementation specific.''
* '''Pass:''' None
* '''Returns:''' A [[Double]] representing PI
== Example ==
#include <maths>
var a:=pi();
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
21749bb0beb9d245a8d4c93e91a49587f722e10b
Pid
0
122
669
668
2013-01-13T18:36:50Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pid() function will return the current process's ID number.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the current process ID
== Example ==
var a:=pid();
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Parallel Functions]]
9aa005acdc26915c6cd1f48bcd6cfe50aaf73b5d
Pow
0
114
626
625
2013-01-13T18:37:19Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pow(n,x) function will return ''n'' to the power of ''x''.
* '''Pass:''' Two [[Int|Ints]]
* '''Returns:''' A [[Double]] representing the result of raising ''n'' to the power of ''x''
== Example ==
#include <maths>
var a:=pow(2,8);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
1649bb57a6ddc26678c647adea604a44054c39c5
Print
0
119
653
652
2013-01-13T18:37:34Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This print(n) function will write the value ''n'' to stdout.
* '''Pass:''' A [[String]] typed variable or value
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:="Hello";
print(f+" world\n");
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
1719dbd34994a5f4fc201d96319b249b7cf827e1
Processes
0
123
674
673
2013-01-13T18:37:47Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This processes() function will return the number of processes the program is running with.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the number of processes
== Example ==
var a:=processes();
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Parallel Functions]]
08aa607e0a55f15f14a92cef9ec0a9dd0ab0c670
Randomnumber
0
115
631
630
2013-01-13T18:38:14Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This randomnumber(n,x) function will return a random number between ''n'' and ''x''.
''Note: A whole number will be returned unless the bounds 0,1 are passed, in which case a floating point number between 0 and 1 is returned.''
* '''Pass:''' Two [[Int|Ints]] defining the bounds of the random number
* '''Returns:''' A [[Double]] representing the random number
== Example ==
#include <maths>
var a:=randomnumber(10,20);
var b:=randomnumber(0,1);
In this case, ''a'' is a whole number between 10 and 20, whereas ''b'' is a decimal number.
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
74b10ab54a2d617b6292df51394a3fe98569406d
Readchar
0
120
659
658
2013-01-13T18:38:28Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This readchar(f) function will read a character from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readchar the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read character from
* '''Returns:''' A character from the file type [[Char]]
== Example ==
#include <io>
var f:=open("hello.txt","r");
var u:=readchar(f);
close(f);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
9dea1633bed66592ee6ae5a15f93c35db7984b29
Readline
0
121
664
663
2013-01-13T18:39:03Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This readline(f) function will read a line (delimited by the new line character) from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readline the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read the line from
* '''Returns:''' A line of the file type [[String]]
== Example ==
#include <io>
var f:=open("hello.txt","r");
var u:=readline(f);
close(f);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
7c3c88150734f68c7ca3ba5734b03ca624783b0b
Recordtime
0
131
715
714
2013-01-13T18:39:23Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The recordtime() function records the current (wall clock) execution time upon reaching that point. This is useful for debugging and performance testing; the recorded times can be displayed via the [[displaytime]] function.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
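== Example ==
An illustrative sketch only; the ''system'' include is an assumption inferred from this function's category:
#include <system>
recordtime();
displaytime();
The time recorded at the first call is then displayed via [[displaytime]].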
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:System Functions]]
e9033859546f9291d8abe65b3a8d7e3700e0c825
Sin
0
190
1043
1042
2013-01-13T18:39:49Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sin(d) function will find the sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find sine of
* '''Returns:''' A [[Double]] representing the sine
== Example ==
#include <maths>
var a:=sin(98.54);
var y:=sin(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
f10f02798926f07004764072d6be31f47148b5ce
Sinh
0
196
1078
1077
2013-01-13T18:40:08Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The sinh(d) function will find the hyperbolic sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic sine of
* '''Returns:''' A [[Double]] representing the hyperbolic sine
== Example ==
#include <maths>
var d:=sinh(0.4);
var y:=sinh(d);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
eff3c59dc520b67f9371103d096ebeaf5136cd3c
Sqr
0
116
637
636
2013-01-13T18:40:29Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sqr(d) function will return the result of squaring ''d''.
* '''Pass:''' A [[Double]] to square
* '''Returns:''' A [[Double]] representing the squared result
== Example ==
#include <maths>
var a:=sqr(3.45);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
07d1834c2860b801d75839c009df731eb503fa43
Sqrt
0
117
642
641
2013-01-13T18:40:54Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sqrt(d) function will return the result of square rooting ''d''.
* '''Pass:''' A [[Double]] to find the square root of
* '''Returns:''' A [[Double]] which is the square root
== Example ==
#include <maths>
var a:=sqrt(8.3);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
77a711404fb7b50155405253c6d84e104b8c416b
Strlen
0
126
691
690
2013-01-13T18:41:09Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This strlen(s) function will return the length of string ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
var a:="hello";
var c:=strlen(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
3ae2cb9638dbb5132c53b739d5c91dcda6c5ad78
Substring
0
127
696
695
2013-01-13T18:41:21Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This substring(s,n,x) function will return the substring of ''s'' between positions ''n'' and ''x''.
* '''Pass:''' A [[String]] and two [[Int|Ints]]
* '''Returns:''' A [[String]] which is a subset of the string passed into it
== Example ==
#include <string>
var a:="hello";
var c:=substring(a,2,4);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
00039dc7206c4a90d5b4b19df36ed17c97cb4aba
Tan
0
191
1049
1048
2013-01-13T18:41:48Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This tan(d) function will find the tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the tangent of
* '''Returns:''' A [[Double]] representing the tangent
== Example ==
#include <maths>
var a:=tan(0.05);
var y:=tan(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
63228f777309d1781053583c87fb74710dbe91a2
Tanh
0
197
1083
1082
2013-01-13T18:42:07Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The tanh(d) function will find the hyperbolic tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic tangent of
* '''Returns:''' A [[Double]] representing the hyperbolic tangent
== Example ==
#include <maths>
var d:=tanh(10.4);
var y:=tanh(d);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
f640c0bc166e1207240702c94b36d1520af1130a
Toint
0
128
701
700
2013-01-13T18:42:24Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This toint(s) function will convert the string ''s'' into an integer.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
var a:="234";
var c:=toint(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
2c9dc6a8320f1b418fae3eeb779920367e24b68f
Uppercase
0
129
706
705
2013-01-13T18:42:38Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This uppercase(s) function will return the upper case result of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
var a:="HeLlO";
var c:=uppercase(a);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
5eb20bab442a0f395677d6fbdb8e0e890db785e7
Writestring
0
203
1111
1110
2013-01-13T18:42:52Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This writestring(f,a) function will write the value of ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[String]] to write
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:=open("hello.txt","w");
writestring(f,"hello - test");
close(f);
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
fcf6dbc20160933467c80012736d0032afd6bf0c
Writebinary
0
204
1115
1114
2013-01-13T18:43:03Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This writebinary(f,a) function will write the value of ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[Int]] variable or value to write into the file in a binary manner
* '''Returns:''' Nothing
== Example ==
#include <io>
var f:=open("hello.txt","w");
writebinary(f,127);
close(f);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
fa754b0738c3b01f0ae001434bffb1414b6232c6
Blocking
0
84
464
463
2013-01-13T18:43:40Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
blocking[ ]
== Semantics ==
This type will force P2P communication to be blocking, which is the default setting.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: blocking[];
a:=b;
The P2P communication (send on process 2 and receive on process 1) resulting from the assignment ''a:=b'' will force program flow to wait until it has completed. The ''blocking'' type has been omitted from the type of variable ''a'', but it is applied by default.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
55880500f0773466e297762ab742dc7bc96e7c6c
Async
0
83
458
457
2013-01-13T18:43:53Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
async[ ]
== Semantics ==
This type will specify that the communication to be carried out should be done asynchronously. Asynchronous communication is often very useful and, if used correctly, can increase the efficiency of some applications (although care must be taken.) There are a number of different ways that the results of asynchronous communication can be accepted: when the asynchronous operation is honoured the data is placed into the variable, but exactly when the operation will be honoured is non-deterministic, so care must be taken if using dirty values.
The [[sync]] keyword allows the programmer to synchronise either ALL asynchronous communication or that of a specific variable. The programmer must ensure that all asynchronous communications have been honoured before the process exits, otherwise the resulting behaviour is undefined.
== Examples ==
var a:Int::allocated[multiple[]] :: channel[0,1] :: async[];
var p;
par p from 0 to 2
{
a:=89;
var q:=20;
q:=a;
sync q;
};
In this example, ''a'' is declared to be an integer, allocated to all processes, and to act as an asynchronous channel between processes 0 and 1. In the par loop, the assignment ''a:=89'' is applicable on process 0 only, resulting in an asynchronous send. Each process executes the assignment and declaration ''var q:=20'' but only process 1 will execute the last assignment ''q:=a'', resulting in an asynchronous receive. Each process then synchronises all the communications relating to variable ''q''.
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: async[];
var c:Int::allocated[single[on[3]]] :: async[];
a:=b;
c:=a;
b:=c;
sync;
This example demonstrates the use of the ''async'' type in terms of default shared variable style communication. In the assignment ''a:=b'', processor 2 will issue an asynchronous send and processor 1 will issue a synchronous (standard) receive. The second assignment, ''c:=a'', processor 3 will issue an asynchronous receive and processor 1 a synchronous send. In the last assignment, ''b:=c'', both processors (3 and 2) will issue asynchronous communication calls (send and receive respectively.) The last line of the program will force each process to wait and complete all asynchronous communications.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
70685ac9127199bc03c16679f34c34ecd7157900
Buffered
0
87
483
482
2013-01-13T18:44:04Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
buffered[buffersize]
== Semantics ==
This type will ensure that P2P Send will reach the finish state (i.e. complete) when the message is copied into a buffer of size ''buffersize'' bytes. At some later point the message will be sent to the target process. If ''buffersize'' is not provided then a default is used. This type associates with the [[sync]] keyword which will wait until the message has been copied out of the buffer.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: buffered[500];
var c:Int::allocated[single[on[2]]] :: buffered[500] :: nonblocking[];
a:=b;
a:=c;
In the P2P communication resulting from the assignment ''a:=b'', process 2 will issue a (blocking) buffered send (buffer size 500 bytes), which will complete once the message has been copied into this buffer. In the assignment ''a:=c'', process 2 will issue another send, this time buffered but nonblocking, where program flow will continue between the start and finish states of communication. The finish state will be reached once the value of variable ''c'' has been copied into a buffer held on process 2.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
47951d45639b3052d8f190b6144d129b130722a7
Nonblocking
0
85
470
469
2013-01-13T18:44:14Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
nonblocking[ ]
== Semantics ==
This type will force P2P communication to be nonblocking. In this mode communication (send or receive) can be thought of as having two distinct states - start and finish. The nonblocking type will start communication and allow program execution to continue between these two states, whilst blocking (standard) mode requires that the finish state has been reached before continuing. The [[sync]] keyword can be used to force the program to wait until the finish state has been reached.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[];
var b:Int::allocated[single[on[2]]];
a:=b;
sync a;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking receive whilst process 2 will issue a blocking send. All nonblocking communication with respect to variable ''a'' is completed by the keyword ''sync a''.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
a40af8798f31c2839faf82c1ac19343fbe28fa20
Ready
0
88
490
489
2013-01-13T18:44:24Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
ready[ ]
== Semantics ==
The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunction with the [[nonblocking]] type, communication start will wait until a matching receive is posted. This type acts as a form of handshaking and can improve performance in some uses.
== Example ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: ready[];
var c:Int::allocated[single[on[2]]] :: ready[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' will only begin once the receive from process 1 has been issued. With the statement ''a:=c'' the send, even though it is [[nonblocking]], will only start once a matching receive has been issued too.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
78e09253b412e10738757975af2cf0513cfb4548
Standard
0
86
476
475
2013-01-13T18:44:32Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
standard[ ]
== Semantics ==
This type will force P2P sends to follow the standard form of reaching the finish state either when the message has been delivered or it has been copied into a buffer on the sender. This is the default applied if further type information is not present.
== Example ==
var a:Int::allocated[single[on[1]]] :: nonblocking[] :: standard[];
var b:Int::allocated[single[on[2]]] :: standard[];
a:=b;
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking standard receive whilst process 2 will issue a blocking standard send.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
ed929215fc9eb4ad5f9d1b6a0c3b0c02f3d86a4b
Synchronous
0
89
496
495
2013-01-13T18:44:44Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
synchronous[]
== Semantics ==
By using this type, the send of P2P communication will only reach the finish state once the message has been received by the target processor.
== Examples ==
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: synchronous[] :: blocking[];
var c:Int::allocated[single[on[2]]] :: synchronous[] :: nonblocking[];
a:=b;
a:=c;
The send of assignment ''a:=b'' (and program execution on process 2) will only complete once process 1 has received the value of ''b''. The send involved with the second assignment is synchronous [[nonblocking]] where program execution can continue between the start and finish state, the finish state only reached once process 1 has received the message (value of ''c''.) Incidentally, as already mentioned, the [[blocking]] type of variable ''b'' would have been chosen by default if omitted (as in previous examples.)
var a:Int :: allocated[single[on[0]]];
var b:Int :: allocated[single[on[1]]];
a:=b;
a:=(b :: synchronous[]);
The code example above demonstrates the programmer's ability to change the communication send mode just for a specific assignment. In the first assignment, process 1 issues a [[blocking]] [[standard]] send, however in the second assignment the communication mode type ''synchronous'' is coerced with the type of ''b'' to provide a [[blocking]] synchronous send just for this assignment only.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
7e90963f5b5c9315051babba1de6fa01cf3e2555
Oubliette
0
176
940
939
2013-01-13T18:46:38Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended in the future to support extra libraries via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded in the compiler, Oubliette just considers these to be normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
7bf7b617b7f4bf315c820297acdb572f80c52181
Functions
0
38
208
207
2013-01-13T18:50:27Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Syntax ==
function returntype name(arguments)
Where ''returntype'' is a type chain or ''void''.
== Semantics ==
The type of the variable depends on the pass semantics (by reference or by value.) Broadly, all [[:Category:Element Types|element types]] by themselves are pass by value and [[:Category:Compound Types|compound types]] are pass by reference, although this behaviour can be overridden by additional type information. Memory allocated on the heap is pass by reference; static or stack frame memory is pass by value.
== Example ==
function Int add(var a:Int,var b:Int) {
return a + b;
};
This function takes two integers and will return their sum.
function void modify(var a:Int::heap) {
a:=88;
};
In this code example, the ''modify'' function will accept an integer variable but this is allocated on the heap (pass by reference.) The assignment will modify the value of the variable being passed in and will still be accessible once the function has terminated.
== Function prototypes ==
Instead of specifying the entire function, the programmer may just provide the prototype (no body) of the function and resolution will be deferred until link time. This mechanism is most commonly used for calling functions written in other languages; note that you must use the '''native''' modifier with native function prototypes.
=== Native function example ===
function native void myNativeFunction(var a:Int);
== The main function ==
The main function returns void and takes either zero arguments or two. If present, the first argument is the number of command line interface parameters passed in, and the second is a [[String]] array containing these; location 0 of the string array is the program name. The main function is the program entry point, but it is fine for it to be absent from a Mesham code - it is then assumed that the code is a library, accessed only via linkage.
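=== Main function example ===
A minimal sketch of the zero-argument form, mirroring the hello world tutorial (the ''io'' include supplies ''print''):
#include <io>
function void main() {
print("entry point\n");
};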
''Since: Version 0.41b''
[[Category:Core Mesham]]
c03af1f16d2ef0f0131447ab3b4f44ce205343c7
Tutorial - Hello world
0
214
1157
2013-01-14T13:45:37Z
Polas
1
Created page with '== Introduction == In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see and introduction as to how we structure a p…'
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program, use the standard functions, and we will discuss different forms of parallel structure.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything but we will assume this name in the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with one process only then it will spawn any other processes it needs. However, the code can only be run with the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the output:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a further look at the code and see exactly what it is doing. Lines 1 to 3 include standard function headers - we are using function calls in the program from all three of these sub libraries (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''.) Wrapping the names in < > brackets tells the preprocessor to first look for system includes (as these are.)
Line 5 declares the main function, which is the program entry point; all compiled codes that you wish to execute require this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. On line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which is basically saying ''execute this loop from 0 to 3 (4 iterations) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard out. The return value of the [[Pid|pid]] function, which provides us with the current process's absolute id, and the variable ''p'' are both [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement.) It is only possible to print out [[String|Strings]], so the [[Itostring|itostring]] function is called to convert the integer values to strings.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether the blocks are executed one after another (sequentially) or at the same time (in parallel.) Secondly, see how we have displayed both the process id (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example it is probably the case, there is no guarantee that these will be equal - the language will allocate the iterations of a [[Par|par]] loop to the processes as it sees fit.
0619852cfd1a4f7c159aa3920fc8e0644ea71a8d
1158
1157
2013-01-14T13:51:40Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program, use the standard functions, and we will discuss different forms of parallel structure.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name throughout the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with only one process, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the output:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a closer look at the code and see exactly what it is doing. Lines 1 to 3 include the standard function headers - we use function calls from all three of these sub-libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''). Wrapping a name in the < > brackets tells the preprocessor to look first for system includes (which these are).
Line 5 declares the main function, which is the program entry point; any compiled code that you wish to execute requires this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. At line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which essentially says ''execute this loop from 0 to 3 (i.e. 4 times) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard output. The return value of the [[Pid|pid]] function, which provides the current process's absolute ID, and the variable ''p'' are both of type [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement). It is only possible to print [[String|strings]], so the [[Itostring|itostring]] function is called to convert the integer values into strings.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel). Secondly, see how we have displayed both the process ID (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they are probably equal, there is no guarantee of this - the language will allocate the iterations of a [[Par|par]] loop to whichever processes it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to whichever processes it deems most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see output similar to:
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2, and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing, via the skip command, and at the same time run the par loop.'' In fact, a [[Par|par]] loop is a syntactic shortcut for a number of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look rather messy!)
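To make that last point concrete, here is a sketch of the par loop written out as four parallel compositions (hypothetical code - the brace-block composition syntax is assumed, and the value of ''p'' is hard-coded into each message since the loop variable no longer exists):
#include <io>
#include <parallel>
#include <string>
function void main() {
{ print("Hello world from pid="+itostring(pid())+" with p=0\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=1\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=2\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=3\n"); };
};
As promised, this is considerably messier than the single par loop!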
619331dc13d7c8a6c3a9af7c6a208cba827a97c8
1159
1158
2013-01-14T13:57:27Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program, how we use the standard functions, and a discussion of different forms of parallel structure.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name throughout the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with only one process, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the following output, although the order of the lines may differ:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a closer look at the code and see exactly what it is doing. Lines 1 to 3 include the standard function headers - we use function calls from all three of these sub-libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''). Wrapping a name in the < > brackets tells the preprocessor to look first for system includes (which these are).
Line 5 declares the main function, which is the program entry point; any compiled code that you wish to execute requires this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. At line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which essentially says ''execute this loop from 0 to 3 (i.e. 4 times) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard output. The return value of the [[Pid|pid]] function, which provides the current process's absolute ID, and the variable ''p'' are both of type [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement). It is only possible to print [[String|strings]], so the [[Itostring|itostring]] function is called to convert the integer values into strings.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel). Secondly, see how we have displayed both the process ID (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they are probably equal, there is no guarantee of this - the language will allocate the iterations of a [[Par|par]] loop to whichever processes it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to whichever processes it deems most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see output similar to the following (though perhaps with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2, and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing, via the skip command, and at the same time run the par loop.'' In fact, a [[Par|par]] loop is a syntactic shortcut for a number of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look rather messy!)
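To make that last point concrete, here is a sketch of the par loop written out as four parallel compositions (hypothetical code - the brace-block composition syntax is assumed, and the value of ''p'' is hard-coded into each message since the loop variable no longer exists):
#include <io>
#include <parallel>
#include <string>
function void main() {
{ print("Hello world from pid="+itostring(pid())+" with p=0\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=1\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=2\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=3\n"); };
};
As promised, this is considerably messier than the single par loop!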
== Absolute process selection ==
We have already said that the [[Par|par]] loop makes no guarantee as to which iteration is placed upon which process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs: the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
This, if you compile and execute it, will display two lines of text - one saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
9aec051f35a909f7e9b5dafe3a857ebb6d713f76
1160
1159
2013-01-14T14:03:15Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program, how we use the standard functions, and a discussion of different forms of parallel structure.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name throughout the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with only one process, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the following output, although the order of the lines may differ:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a closer look at the code and see exactly what it is doing. Lines 1 to 3 include the standard function headers - we use function calls from all three of these sub-libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''). Wrapping a name in the < > brackets tells the preprocessor to look first for system includes (which these are).
Line 5 declares the main function, which is the program entry point; any compiled code that you wish to execute requires this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. At line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which essentially says ''execute this loop from 0 to 3 (i.e. 4 times) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard output. The return value of the [[Pid|pid]] function, which provides the current process's absolute ID, and the variable ''p'' are both of type [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement). It is only possible to print [[String|strings]], so the [[Itostring|itostring]] function is called to convert the integer values into strings.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel). Secondly, see how we have displayed both the process ID (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they are probably equal, there is no guarantee of this - the language will allocate the iterations of a [[Par|par]] loop to whichever processes it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to whichever processes it deems most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see output similar to the following (though perhaps with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2, and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing, via the skip command, and at the same time run the par loop.'' In fact, a [[Par|par]] loop is a syntactic shortcut for a number of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look rather messy!)
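To make that last point concrete, here is a sketch of the par loop written out as four parallel compositions (hypothetical code - the brace-block composition syntax is assumed, and the value of ''p'' is hard-coded into each message since the loop variable no longer exists):
#include <io>
#include <parallel>
#include <string>
function void main() {
{ print("Hello world from pid="+itostring(pid())+" with p=0\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=1\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=2\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=3\n"); };
};
As promised, this is considerably messier than the single par loop!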
== Absolute process selection ==
We have already said that the [[Par|par]] loop makes no guarantee as to which iteration is placed upon which process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs: the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
This, if you compile and execute it, will display two lines of text - one saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!), you can imagine that if you wanted to select multiple processes to do the same thing by their absolute process IDs, many duplicate proc statements in your code would be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement, which allows the programmer to select multiple processes to execute the same block. Building upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes by their absolute process IDs, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example.
01865e42c7a12c76d2ce478f9bfd0ccf9145aaef
1161
1160
2013-01-14T14:05:09Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program, how we use the standard functions, and a discussion of different forms of parallel structure. This tutorial assumes that you have the Mesham compiler and runtime library installed and working on your machine.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name throughout the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with only one process, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the following output, although the order of the lines may differ:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a closer look at the code and see exactly what it is doing. Lines 1 to 3 include the standard function headers - we use function calls from all three of these sub-libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''). Wrapping a name in the < > brackets tells the preprocessor to look first for system includes (which these are).
Line 5 declares the main function, which is the program entry point; any compiled code that you wish to execute requires this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. At line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which essentially says ''execute this loop from 0 to 3 (i.e. 4 times) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard output. The return value of the [[Pid|pid]] function, which provides the current process's absolute ID, and the variable ''p'' are both of type [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement). It is only possible to print [[String|strings]], so the [[Itostring|itostring]] function is called to convert the integer values into strings.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel). Secondly, see how we have displayed both the process ID (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they are probably equal, there is no guarantee of this - the language will allocate the iterations of a [[Par|par]] loop to whichever processes it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to whichever processes it deems most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see output similar to the following (though perhaps with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2, and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing, via the skip command, and at the same time run the par loop.'' In fact, a [[Par|par]] loop is a syntactic shortcut for a number of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look rather messy!)
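To make that last point concrete, here is a sketch of the par loop written out as four parallel compositions (hypothetical code - the brace-block composition syntax is assumed, and the value of ''p'' is hard-coded into each message since the loop variable no longer exists):
#include <io>
#include <parallel>
#include <string>
function void main() {
{ print("Hello world from pid="+itostring(pid())+" with p=0\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=1\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=2\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=3\n"); };
};
As promised, this is considerably messier than the single par loop!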
== Absolute process selection ==
We have already said that the [[Par|par]] loop makes no guarantee as to which iteration is placed upon which process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs: the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
This, if you compile and execute it, will display two lines of text - one saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!), you can imagine that if you wanted to select multiple processes to do the same thing by their absolute process IDs, many duplicate proc statements in your code would be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement, which allows the programmer to select multiple processes to execute the same block. Building upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes by their absolute process IDs, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to the variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we cannot leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' would be and compilation would produce an error (try it!)
ce1d0f35abf4a1aa7e96c97e80788f1615d063f9
1162
1161
2013-01-14T14:05:26Z
Polas
1
moved [[Tutorial:gettingStarted]] to [[Tutorial - Hello world]]
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program, how we use the standard functions, and a discussion of different forms of parallel structure. This tutorial assumes that you have the Mesham compiler and runtime library installed and working on your machine.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name throughout the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with only one process, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the following output, although the order of the lines may differ:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a closer look at the code and see exactly what it is doing. Lines 1 to 3 include the standard function headers - we use function calls from all three of these sub-libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''). Wrapping a name in the < > brackets tells the preprocessor to look first for system includes (which these are).
Line 5 declares the main function, which is the program entry point; any compiled code that you wish to execute requires this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. At line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which essentially says ''execute this loop from 0 to 3 (i.e. 4 times) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard output. The return value of the [[Pid|pid]] function, which provides the current process's absolute ID, and the variable ''p'' are both of type [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement). It is only possible to print [[String|strings]], so the [[Itostring|itostring]] function is called to convert the integer values into strings.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel). Secondly, see how we have displayed both the process ID (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they are probably equal, there is no guarantee of this - the language will allocate the iterations of a [[Par|par]] loop to whichever processes it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to whichever processes it deems most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see output similar to the following (though perhaps with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2, and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing, via the skip command, and at the same time run the par loop.'' In fact, a [[Par|par]] loop is a syntactic shortcut for a number of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look rather messy!)
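To make that last point concrete, here is a sketch of the par loop written out as four parallel compositions (hypothetical code - the brace-block composition syntax is assumed, and the value of ''p'' is hard-coded into each message since the loop variable no longer exists):
#include <io>
#include <parallel>
#include <string>
function void main() {
{ print("Hello world from pid="+itostring(pid())+" with p=0\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=1\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=2\n"); } ||
{ print("Hello world from pid="+itostring(pid())+" with p=3\n"); };
};
As promised, this is considerably messier than the single par loop!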
== Absolute process selection ==
We have already said that the [[Par|par]] loop makes no guarantee as to which iteration is placed upon which process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs: the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
This, if you compile and execute it, will display two lines of text - one saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!), you can imagine that if you wanted to select multiple processes to do the same thing by their absolute process IDs, many duplicate proc statements in your code would be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement, which allows the programmer to select multiple processes to execute the same block. Building upon the previous example code:
 #include <io>
 #include <parallel>
 #include <string>
 function void main() {
     skip ||
     group 0,1,2,3 {
         print("Hello world from pid="+itostring(pid())+"\n");
     };
 };
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes by their absolute process ID, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to the variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we cannot leave the declaration of this variable in the code: the language has no way to deduce what the type of ''p'' will be, and would report an error during compilation (try it!)
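One way to picture group selection is that every process compares its own absolute ID against the listed set, and only the members execute the block. The following Python sketch renders that idea with threads (the pool size and the filtering mechanism are assumptions for illustration, not how Mesham is implemented):

```python
import threading

TOTAL_PROCESSES = 5   # assumed pool size for this sketch
GROUP = {0, 1, 2, 3}  # the ids listed in 'group 0,1,2,3'

output = []
lock = threading.Lock()

def run_process(pid):
    # Only members of the group execute the block,
    # selected by absolute process id.
    if pid in GROUP:
        with lock:
            output.append(f"Hello world from pid={pid}")

threads = [threading.Thread(target=run_process, args=(i,))
           for i in range(TOTAL_PROCESSES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(output))
```

Exactly the four listed ids produce a line; process 4 exists but skips the block.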
== Summary ==
Whilst the code we have looked at here is very simple, this tutorial has covered the four basic parallel constructs which we can use to structure our code and discussed the differences between them. We have also written a simple Mesham program with a main function, using standard functions by including the appropriate sub libraries.
84011e3b0af4dc44dbf758a47b746e94841d4dd4
1164
1163
2013-01-14T14:17:05Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see and introduction as to how we structure a program code, use the standard functions and discuss different forms of parallel structure. This tutorial assumes that you have gotten the Mesham compiler and runtime library installed and working on your machine.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilating and execution ===
Copy and paste this code into a text file and name is ''test.mesh'' - of course it can be called anything but we will assume this name in the tutorial. Compile via issuing the command ''mcc test.mesh'' which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with one process only then it will spawn any other processes it needs. However, the code can only be run with the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the output although the order of the lines may be different:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a further look at the code and see exactly what it is doing then. Lines 1 to 3 are including standard function headers - we are using function calls in the program from all three of these sub libraries (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''.) By wrapping in the < > braces tells the preprocessor to first look for system includes (as these are.)
Line 5 declares the main function which is the program entry point and all compiled codes that you wish to execute require this function. Only a limited number of items such as type and program variable declaration may appear outside of a function body. At line 6 we are declaring the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. Line 7 we are using the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop) which is basically saying ''execute this loop from 0 to 3 (4) times in parallel running each iteration within its own process.''
Line 8 is executed by four, independent processes, each calling the [[Print|print]] function to display a message to standard out. The return value of the [[Pid|pid]] function, which provides us with the current processes absolute id, and the variable ''p'' are [[Int]] (the later found because ''p'' is used in the [[Par|par]] statement. It is only possible to print out [[String|Strings]], so the [[Itostring|itostring]] function is called to convert between an integer and string value.
At this point it is worth noting two aspect of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether the blocks are executed one after another (sequentially) or at the same time (parallel.) Secondly, see how we have displayed both the process id (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example it is probably the case, there is no guarantee that these will be equal - the language will allocate the iterations of a [[Par|par]] loop to the processes which it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You will have just read that the [[Par|par]] loop assigns iterations to the processes which it feels is more appropriate - we are now going to have a look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above, you should see some output similiar to (but with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well the output it telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2 etc... The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''Do nothing using the skip command and at the same time run the par loop.'' In fact a [[Par|par]] loop is syntactic short cut for lots of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look really messy!)
== Absolute process selection ==
We have already said that the [[Par|par]] loop does not make any guarantee as to what iteration is placed upon what process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
Which, if you compile and execute, will display two lines of text - the first saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!) you can imagine if you want to select multiple processes to do the same thing by their absolute process ID then many duplicate proc statements in your code will be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement which allows the programmer to select multiple processes to execute the same block. Based upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes on their absolute process ID, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we can not leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' will be and would produce an error during compilation (try it!)
== Summary ==
Whilst the code we have been looking at here is very simple, in this tutorial we have looked at the four basic parallel constructs which we can use to structure our code and discussed the differences between these. We have also looked at writing a simple Mesham code using the main function and using standard functions via including the appropriate sub libraries.
[[Category:Tutorial]]
d7fe24fbf63521a5dfabe9fc7863e2ee78997ef4
1165
1164
2013-01-14T14:22:51Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see and introduction as to how we structure a program code, use the standard functions and discuss different forms of parallel structure. This tutorial assumes that you have gotten the Mesham compiler and runtime library installed and working on your machine.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilating and execution ===
Copy and paste this code into a text file and name is ''test.mesh'' - of course it can be called anything but we will assume this name in the tutorial. Compile via issuing the command ''mcc test.mesh'' which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with one process only then it will spawn any other processes it needs. However, the code can only be run with the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the output although the order of the lines may be different:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a further look at the code and see exactly what it is doing then. Lines 1 to 3 are including standard function headers - we are using function calls in the program from all three of these sub libraries (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''.) By wrapping in the < > braces tells the preprocessor to first look for system includes (as these are.)
Line 5 declares the main function which is the program entry point and all compiled codes that you wish to execute require this function. Only a limited number of items such as type and program variable declaration may appear outside of a function body. At line 6 we are declaring the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. Line 7 we are using the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop) which is basically saying ''execute this loop from 0 to 3 (4) times in parallel running each iteration within its own process.''
Line 8 is executed by four, independent processes, each calling the [[Print|print]] function to display a message to standard out. The return value of the [[Pid|pid]] function, which provides us with the current processes absolute id, and the variable ''p'' are [[Int]] (the later found because ''p'' is used in the [[Par|par]] statement. It is only possible to print out [[String|Strings]], so the [[Itostring|itostring]] function is called to convert between an integer and string value.
At this point it is worth noting two aspect of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether the blocks are executed one after another (sequentially) or at the same time (parallel.) Secondly, see how we have displayed both the process id (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example it is probably the case, there is no guarantee that these will be equal - the language will allocate the iterations of a [[Par|par]] loop to the processes which it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You will have just read that the [[Par|par]] loop assigns iterations to the processes which it feels is more appropriate - we are now going to have a look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above, you should see some output similiar to (but with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well the output it telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2 etc... The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''Do nothing using the skip command and at the same time run the par loop.'' In fact a [[Par|par]] loop is syntactic short cut for lots of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look really messy!)
== Absolute process selection ==
We have already said that the [[Par|par]] loop does not make any guarantee as to what iteration is placed upon what process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
Which, if you compile and execute, will display two lines of text - the first saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!) you can imagine if you want to select multiple processes to do the same thing by their absolute process ID then many duplicate proc statements in your code will be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement which allows the programmer to select multiple processes to execute the same block. Based upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes on their absolute process ID, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we can not leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' will be and would produce an error during compilation (try it!)
== Summary ==
Whilst the code we have been looking at here is very simple, in this tutorial we have looked at the four basic parallel constructs which we can use to structure our code and discussed the differences between these. We have also looked at writing a simple Mesham code using the main function and using standard functions via including the appropriate sub libraries.
[[Category:Tutorials]]
b988370960d2232112b43486b89e61ca80659104
1166
1165
2013-01-14T15:19:18Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see and introduction as to how we structure a program code, use the standard functions and discuss different forms of parallel structure. This tutorial assumes that you have gotten the Mesham compiler and runtime library installed and working on your machine.
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilating and execution ===
Copy and paste this code into a text file and name is ''test.mesh'' - of course it can be called anything but we will assume this name in the tutorial. Compile via issuing the command ''mcc test.mesh'' which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with one process only then it will spawn any other processes it needs. However, the code can only be run with the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
In running the code you should see the output although the order of the lines may be different:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a further look at the code and see exactly what it is doing then. Lines 1 to 3 are including standard function headers - we are using function calls in the program from all three of these sub libraries (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''.) By wrapping in the < > braces tells the preprocessor to first look for system includes (as these are.)
Line 5 declares the main function which is the program entry point and all compiled codes that you wish to execute require this function. Only a limited number of items such as type and program variable declaration may appear outside of a function body. At line 6 we are declaring the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. Line 7 we are using the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop) which is basically saying ''execute this loop from 0 to 3 (4) times in parallel running each iteration within its own process.''
Line 8 is executed by four, independent processes, each calling the [[Print|print]] function to display a message to standard out. The return value of the [[Pid|pid]] function, which provides us with the current processes absolute id, and the variable ''p'' are [[Int]] (the later found because ''p'' is used in the [[Par|par]] statement. It is only possible to print out [[String|Strings]], so the [[Itostring|itostring]] function is called to convert between an integer and string value.
At this point it is worth noting two aspect of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether the blocks are executed one after another (sequentially) or at the same time (parallel.) Secondly, see how we have displayed both the process id (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example it is probably the case, there is no guarantee that these will be equal - the language will allocate the iterations of a [[Par|par]] loop to the processes which it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You will have just read that the [[Par|par]] loop assigns iterations to the processes which it feels is more appropriate - we are now going to have a look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above, you should see some output similiar to (but with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well the output it telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2 etc... The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''Do nothing using the skip command and at the same time run the par loop.'' In fact a [[Par|par]] loop is syntactic short cut for lots of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look really messy!)
== Absolute process selection ==
We have already said that the [[Par|par]] loop does not make any guarantee as to what iteration is placed upon what process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
Which, if you compile and execute, will display two lines of text - the first saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!) you can imagine if you want to select multiple processes to do the same thing by their absolute process ID then many duplicate proc statements in your code will be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement which allows the programmer to select multiple processes to execute the same block. Based upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes on their absolute process ID, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we can not leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' will be and would produce an error during compilation (try it!)
== Summary ==
Whilst the code we have been looking at here is very simple, in this tutorial we have looked at the four basic parallel constructs which we can use to structure our code and discussed the differences between them. We have also looked at writing a simple Mesham program using the main function and calling standard functions by including the appropriate sub-libraries.
[[Category:Tutorials|Hello world]]
b5d31485f968cee1ee0c9ecee56fdd0ad83f5711
Tutorial:gettingStarted
0
215
1174
2013-01-14T14:05:26Z
Polas
1
moved [[Tutorial:gettingStarted]] to [[Tutorial - Hello world]]
wikitext
text/x-wiki
#REDIRECT [[Tutorial - Hello world]]
85fbc14874a26f0ed9ff198aa41fd7d659324dc2
Template:News
10
209
1133
1132
2013-01-14T14:09:55Z
Polas
1
wikitext
text/x-wiki
* The [[Tutorial - Hello world|Hello world]] tutorial added which illustrates how to get started with the language
1c8783d14e2204c0befb3163bb163c486ae246e1
NAS-IS Benchmark
0
144
802
801
2013-01-14T14:20:12Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it - class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantity of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower-level primitive communication types have been used, so it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done using a supercomputer cluster, testing the Mesham code against the existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than the other examples, with files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
You can download the entire code package [http://www.mesham.com/downloads/npb.tar.gz here]
[[Category:Examples]]
3670e90607e97c51e3e17153d9ba40ef61cf4e61
803
802
2013-01-14T14:20:29Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it - class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantity of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower-level primitive communication types have been used, so it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done using a supercomputer cluster, testing the Mesham code against the existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than the other examples, with files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
You can download the entire code package [http://www.mesham.com/downloads/npb.tar.gz here]
[[Category:Example Codes]]
132e8c796102d1607c6e1853562d5611f294b562
804
803
2013-01-14T14:22:17Z
Polas
1
moved [[NPB]] to [[NAS-IS Benchmark]]
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it - class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantity of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower-level primitive communication types have been used, so it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done using a supercomputer cluster, testing the Mesham code against the existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than the other examples, with files for each class of experiment. It is therefore not included on this page, but you can download it.
== Notes ==
Be aware that this version of the code requires at least version 0.5 of Mesham and version 0.2 of the runtime library. The benchmark will not work with the version 0.41(b) release, which is also on the website.
== Download ==
You can download the entire code package [http://www.mesham.com/downloads/npb.tar.gz here]
[[Category:Example Codes]]
132e8c796102d1607c6e1853562d5611f294b562
Mandelbrot
0
135
737
736
2013-01-14T14:21:02Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:mandle.gif|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which are quite simple, do not really matter for our purposes. The important points are firstly that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and secondly that it will produce an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the specific colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
<br style="clear: both" />
== Performance ==
[[Image:mandlezoom.jpg|400px|left|Mandelbrot Performance Evaluation, Mesham against C-MPI]]
The Mandelbrot example was evaluated against one written in C-MPI on a supercomputing cluster. The graph details the performance of the two codes; on small numbers of processors their performance was so close that those results are not shown. Due to the embarrassingly parallel nature of this problem, the performance advantages of using Mesham do not start to stand out until a large number of processors is reached.
<br style="clear: both" />
== Source Code ==
var pnum:=4; // number of processes to run this on
var hxres:=1000;
var hyres:=1000;
var magnify:=1;
var itermax:=1000;
var pixel:record["r",Int,"g",Int,"b",Int];
var mydata:array[pixel,hxres,hyres] :: allocated[row[] :: horizontal[pnum] :: single[evendist[]]];
var s:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1
{
var hy;
for hy from (mydata#p).low to (mydata#p).high
{
var hx;
for hx from 1 to hxres
{
var cx:=((((hx % hxres) - 0.5) % magnify) * 3) - 0.7;
var cy:=((((hy + (mydata#p).start) % hyres) - 0.5) % magnify) * 3;
var x:Double;
x:=0;
var y:Double;
y:=0;
var iteration;
var ts:=0;
for iteration from 1 to itermax
{
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100)
{
ts:=iteration;
iteration:=999999;
};
};
var red:=0;
var green:=0;
var blue:=0;
if (iteration > 999998)
{
blue:=(ts * 10) + 100;
red:=(ts * 3) + 50;
green:=(ts * 3)+ 50;
if (ts > 25)
{
blue:=0;
red:=(ts * 10);
green:=(ts * 5);
};
if (blue > 255) blue:=255;
if (red > 255) red:=255;
if (green > 255) green:=255;
};
(((mydata#p)#hy)#hx).r:=red;
(((mydata#p)#hy)#hx).g:=green;
(((mydata#p)#hy)#hx).b:=blue;
};
};
};
s:=mydata;
proc 0
{
var fname:="picture.ppm";
var fil:=openfile[fname,"w"]; // open file
// generate picture file header
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,1000];
writetofile[fil," "];
writetofile[fil,1000];
writetofile[fil,"\\n255\\n"];
// now write data into the file
var j;
for j from 0 to hyres - 1
{
var i;
for i from 0 to hxres - 1
{
var f:=((s#j)#i).r;
writechartofile[fil,f];
f:=((s#j)#i).g;
writechartofile[fil,f];
f:=((s#j)#i).b;
writechartofile[fil,f];
};
};
closefile[fil];
};
== Notes ==
To change the number of processes, edit ''pnum''. In order to change the size of the image, edit hxres and hyres. The Mandelbrot set will be calculated up to itermax iterations for each point; by increasing this value you will get a crisper image (but it will take much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 will generate the whole image, and by increasing this value the computation is directed into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap format (PPM). Viewers for these on Unix-based systems are easy to come by (e.g. Eye of GNOME), but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last block on process 0 so that a BMP bitmap is created instead.
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here]
[[Category:Example Codes]]
1640c850d2cff507f713c9ce51c049a2110c0227
Image processing
0
142
785
784
2013-01-14T14:21:18Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform some parallel image processing on a black and white image. The image processing supported is applying a low or high pass filter to the image. However, to do this the image needs to be transformed into the frequency domain - and then requires transformation back into the spatial domain afterwards. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm and there are more efficient ones out there. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters and also invoke the high pass filter rather than the low pass filter which the code applies at the moment.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
The performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two different experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. Evaluations were performed against the Fastest Fourier Transform in the West (FFTW) and, for 128MB, a book example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler will optimise the code in this case to avoid any slowdown.)
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
var complex : record["r",Float,"i",Float];
var n:=256; // image size
var m:=4; // number of processors
function void main[]
{
var a:array[complex,n,n] :: allocated[row[] :: single[on[0]]];
var s:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var s2:array[complex,n,n] :: allocated[col[] :: horizontal[m] :: single[evendist[]]];
var s3:array[complex,n,n] :: allocated[row[] :: horizontal[m] :: single[evendist[]] :: share[s2]];
proc 0
{
var orig:="clown.ppm";
loadfile[orig,a];
moveorigin[a];
};
s:=a;
var sin:array[complex,n % 2] :: allocated[row[]::multiple[]];
computesin[sin];
var p;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
filter[a];
invert[a];
};
s:=a;
par p from 0 to m - 1
{
var i;
for i from (s#p).low to (s#p).high FFT[((s#p)#i),sin];
};
s2:=s;
par p from 0 to m - 1
{
var i;
for i from (s3#p).low to (s3#p).high FFT[((s3#p)#i),sin];
};
a:=s3;
proc 0
{
moveorigin[a];
descale[a];
var res:="result.ppm";
writefile[res,a];
};
};
function void computesin[var sinusoid]
{
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var j;
for j from 0 to (n % 2) - 1
{
var topass:Float :: allocated[multiple[]];
topass:=((2 * pi[] * j) % n);
(sinusoid#j).i:=negsin[topass];
(sinusoid#j).r:=cos[topass];
};
};
function void FFT[var data, var sinusoid]
{
data : array[complex,n] :: allocated[row[] :: multiple[]];
sinusoid:array[complex, n % 2] :: allocated[row[] :: multiple[]];
var i2:=log[n];
bitreverse[data,n]; // data decomposition
var increvec;
for increvec from 2 to n // loops to log n stages
{
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec % 2) - 1) // for each frequency spectra in stage
{
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 // do butterfly for each point in the spectra
(
var f0:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).r)
- ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).i);
var f1:=((data#(i0 + i1 + (increvec % 2))).r * (sinusoid#(i0 << i2)).i)
+ ((data#(i0 + i1 + (increvec % 2))).i * (sinusoid#(i0 << i2)).r);
(data#(i0 + i1 + (increvec % 2))).r:=(data#(i0 + i1)).r - f0;
(data#(i0 + i1 + (increvec % 2))).i:=(data#(i0 + i1)).i - f1;
(data#(i0 + i1)).r := (data#(i0 + i1)).r + f0;
(data#(i0 + i1)).i := (data#(i0 + i1)).i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void writefile[var thename:String, var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[thename,"w"];
writetofile[fil,"P6\\n# CREATOR: LOGS Program\\n"];
writetofile[fil,n];
writetofile[fil," "];
writetofile[fil,n];
writetofile[fil,"\\n255\\n"];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var f:=((data#i)#j).r;
writechartofile[fil,f];
writechartofile[fil,f];
writechartofile[fil,f];
};
};
closefile[fil];
};
function void loadfile[var name,var data]
{
name : String :: allocated[multiple[]];
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var fil:=openfile[name,"r"];
readline[fil];
readline[fil];
readline[fil];
readline[fil];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var red:=readchar[fil];
var green:=readchar[fil];
var blue:=readchar[fil];
((data#i)#j).r:=toInt[red];
((data#i)#j).i:=toInt[red];
};
};
closefile[fil];
};
function Int lowpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] < 225) return 1;
return 0;
};
function Int highpass[var i, var j]
{
i:Int :: allocated[multiple[]];
j:Int :: allocated[multiple[]];
var val:=sqr[i] + sqr[j];
if (sqrt[val] > 190) return 1;
return 0;
};
function void filter[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * lowpass[i,j];
((data#i)#j).i:=((data#i)#j).i * lowpass[i,j];
};
};
};
function void moveorigin[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
((data#i)#j).r:=((data#i)#j).r * pow[-1,(i + j)];
((data#i)#j).i:=((data#i)#j).i * pow[-1,(i + j)];
};
};
};
function void descale[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var xnumy:=((data#i)#j).r;
xnumy:=xnumy % (n * n);
((data#i)#j).r:=xnumy;
xnumy:=((data#i)#j).i;
xnumy:=neg[xnumy % (n * n)];
((data#i)#j).i:=xnumy;
};
};
};
function void invert[var data]
{
data : array[complex,n,n] :: allocated[row[] :: multiple[]];
var i;
for i from 0 to n - 1
{
var j;
for j from 0 to n - 1
{
var t:=((data#i)#j).i;
((data#i)#j).i:=neg[t];
};
};
};
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then the data is redistributed. It would improve the runtime if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example will produce an image in the Portable PixMap format (PPM). Viewers for these on Unix-based systems are easy to come by (e.g. Eye of GNOME), but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last block on process 0 so that a BMP bitmap is created instead.
== Download ==
You can download the entire Image processing package [http://www.mesham.com/downloads/fftimage.zip here]
[[Category:Example Codes]]
ab40b7d7c6e7026068d59ccb5d94b076757da498
Prefix sums
0
137
749
748
2013-01-14T14:21:26Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Prefix sums is a very simple, basic parallel algorithm commonly used as the building block of many applications. Also known as a scan, each process sums its own value with the values of every preceding process. For instance, p=0 returns its own value, p=1 returns the sum of the p=1 and p=0 values, and p=2 returns the sum of the p=2, p=1 and p=0 values. The MPI reduce command often implements the communication via a logarithmic tree structure.
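To make the scan concrete, the following is a hypothetical sequential sketch (not the parallel, reduce-based version given in the source code on this page); assuming an array ''v'' holding each process's value, position p's result is the sum of elements 0 to p:
var v:array[Int,4] :: allocated[multiple[]];
// ... assume v has been filled with each process's value ...
var p;
for p from 0 to 3
{
var total:=0;
var j;
for j from 0 to p
{
total:=total + (v#j); // accumulate every value up to and including position p
};
print[p," = ",total,"\n"];
};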
== Source Code ==
function void main[var arga,var argb]
{
var m:=10;
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var mine:Int;
mine:= randomnumber[0,toInt[argb#1]];
var i;
for i from 0 to m - 1
{
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print[p," = ",a,"\n"];
};
};
== Notes ==
The main function has been included here so that the user can provide, via command line arguments, the upper range of the random number to generate. The complexity of the prefix sums is hidden away by using the reduce primitive communication type.
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]
[[Category:Example Codes]]
41417773d166b1ca8ad95430c170e13567071486
Dartboard PI
0
139
760
759
2013-01-14T14:21:34Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm for finding the value of PI. At this point it must be noted that there are much better methods out there for finding PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square is equal to the ratio between the two areas - which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example each process performs this simulated throwing a number of times, and then each process's approximation of PI is combined and averaged by one of the processes to obtain the result. Very roughly, this means that with d darts thrown over r rounds on n processes, the time taken in parallel is the time it takes to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded the consideration of communication costs from the parallel case to simplify the concept.) Quite obviously, changing the number of processes, the number of rounds and the number of darts thrown in each round in the example will directly change the accuracy of the result.
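The ratio argument above gives the estimate PI ≈ 4 × (hits / darts). As a minimal single-round sketch, using the ''throwdarts'' function defined in the source code on this page (note that ''%'' denotes division here, as in the full example):
var mypi:Double;
mypi:=0;
var darts:=1000; // number of darts to simulate throwing
// throwdarts returns the number of darts landing inside the circle
mypi:=mypi + (4 * (throwdarts[darts] % darts)); // 4 times the hit ratio approximates PI
print["PI is roughly ",mypi,"\n"];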
== Source Code ==
var m:=10; // number of processes
var pi:array[Double,m,1]:: allocated[row[] :: horizontal[m] :: single[evendist[]]];
var result:array[Double,m] :: allocated[single[on[0]]];
var mypi:Double;
mypi:=0;
var p;
par p from 0 to m - 1
{
var darts:=1000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i:=0;
for i from 0 to rounds
{
mypi:= mypi + (4 * (throwdarts[darts] % darts));
};
((pi#p)#0):=(mypi % rounds);
};
result:=pi;
proc 0
{
var avepi:Double;
avepi:=0;
var j:=0;
for j from 0 to m - 1
{
var y:=(result#j);
avepi:=avepi + y;
};
avepi:=avepi % m;
print["PI = ",avepi,"\n"];
};
function Int throwdarts[var darts]
{
darts: Int :: allocated[multiple[]];
var score:=0;
var n:=0;
for n from 0 to darts
{
var r:=randomnumber[0,1]; // random number between 0 and 1
var xcoord:=(2 * r) - 1;
r:=randomnumber[0,1]; // random number between 0 and 1
var ycoord:=(2 * r) - 1;
if ((sqr[xcoord] + sqr[ycoord]) < 1)
{
score:=score + 1; // hit the dartboard!
};
};
return score;
};
== Notes ==
An interesting aside is that we have used a function in this example, yet there is no main function. The throwdarts function will simulate throwing the darts for each round. As already noted in the language documentation, the main function is optional and without it the compiler will set the program entry point to be the start of the source code.
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]
[[Category:Example Codes]]
a2713f1f003907414e79200eacf2a1f5ec7c85c3
Prime factorization
0
140
769
768
2013-01-14T14:21:42Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example will perform prime factorization of a number in parallel, returning the prime factors which make up that number. The example uses the all reduce primitive communication type. There are actually a number of ways such a result can be obtained - this example is a simple parallel algorithm for the job.
== Source Code ==
var n:=976; // this is the number to factorize
var m:=12; // number of processes
var s:Int :: allocated[multiple[]];
var p;
par p from 0 to m - 1
{
var k:=p;
var divisor;
var quotient:Int;
while (n > 1)
{
divisor:= getprime[k];
quotient:= n % divisor;
var remainder:= mod[n,divisor];
if (remainder == 0)
{
n:=quotient;
} else {
k:=k + m;
};
(s :: allreduce["min"]):=n;
if ((s==n) && (quotient==n))
{
print[divisor,","];
};
n:=s;
};
};
== Notes ==
Note how we have typed the quotient to be an integer - this means that the division n % divisor will throw away the remainder. Also, for the assignment s:=n, we have typed s with an allreduce communication primitive (resulting in the MPI all reduce command.) However, later on we use s as a normal variable in the assignment n:=s, because the typing applied in the previous assignment is only temporary.
As an exercise, the example could be extended so that the user provides the number either by command line arguments or via program input.
== Download ==
You can download the prime factorization source code [http://www.mesham.com/downloads/fact.mesh here]
[[Category:Example Codes]]
db5f3fb7f3f61232bb81a138fe1c065c44a77fc0
NPB
0
216
1176
2013-01-14T14:22:17Z
Polas
1
moved [[NPB]] to [[NAS-IS Benchmark]]
wikitext
text/x-wiki
#REDIRECT [[NAS-IS Benchmark]]
b13d3afac8c6047488d01f48483a9ea039fc6b11
Category:Example Codes
14
217
1178
2013-01-14T14:27:33Z
Polas
1
Created page with '[[Category:Example Codes]]'
wikitext
text/x-wiki
[[Category:Example Codes]]
be25ee476bec88e5cbfd3d1c572824157578c87c
1179
1178
2013-01-14T14:27:44Z
Polas
1
Blanked the page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Tutorials
14
218
1181
2013-01-14T14:28:09Z
Polas
1
Created page with '[[Tutorials]]'
wikitext
text/x-wiki
[[Tutorials]]
66cf4c4798628ec70b2574ef4c36bb98b8ef8395
1182
1181
2013-01-14T14:28:14Z
Polas
1
Blanked the page
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Type Oriented Programming Concept
0
153
841
840
2013-01-14T14:45:43Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Type Oriented Programming ==
Much work has been done investigating programming paradigms. Common paradigms include imperative, functional, object oriented and aspect oriented. However, we have developed the idea of type oriented programming. Taking the familiar concept of a type, we associate in-depth runtime semantics with it, so that the behaviour of variable usage (i.e. access and assignment) can be determined by analysing the specific type. In many languages there is a requirement to combine a number of attributes with a variable; to this end we allow the programmer to combine types together to form a supertype (type chain.)
== Type Chains ==
A type chain is a collection of types, combined together by the programmer. It is this type chain that determines the behaviour of a specific variable. Precedence in the type chain is from right to left (i.e. the last added type will override the behaviour of previously added types.) This precedence allows the programmer to add additional information, either permanently or for a specific expression, as the code progresses.
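As an illustrative sketch of type chains (the types used here all appear in the example codes on this wiki):
var m:=4; // number of processes
// a permanent type chain: one copy of a allocated on every process
var a:Int :: allocated[multiple[]];
// a longer chain: the rightmost types refine the behaviour of those to their left
var data:array[Int,100] :: allocated[row[] :: horizontal[m] :: single[evendist[]]];
// a temporary extension: for this one assignment, a also behaves as a reduction
(a :: reduce[0,"sum"]):=10;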
== Type Variables ==
Type variables are an interesting concept. Similar to normal program variables, they are declared, but they hold a type chain. Throughout program execution they can be dealt with like normal program variables: they can be checked via conditionals, assigned and modified.
== Advantages of the Approach ==
There are a number of advantages to type oriented programming:
* Efficiency - The rich amount of information allows the compiler to perform much static analysis and optimisation resulting in increased efficiency.
* Simplicity - By providing a clean type library the programmer can use well documented types to control many aspects of their code.
* Simpler language - By taking the majority of the language complexity away and placing it into a loosely coupled type library, the language is simpler from a design and implementation (compiler's) point of view. Adding numerous language keywords often results in a brittle design; using type oriented programming this is avoided.
* Maintainability - Changing the type can have a considerable effect on the semantics of the code; by abstracting these details away from the programmer, the code becomes simpler, more flexible and easier to maintain.
== Why use it in HPC ==
Current parallel languages all suffer from the simplicity versus efficiency compromise. Abstracting the programmer away from the low level details gives them a simple to use language, yet the high level of information provided to the compiler allows for much analysis to be performed during the compilation phase. In low level languages (such as C) it is difficult for the compiler to understand how the programmer is using parallelism, hence the optimisation of such code is limited.
We provide the programmer with the choice between explicit and implicit programming - they can rely on the inbuilt, safe language defaults or alternatively use additional types to elicit more control (and performance.) Therefore the language is suitable for both the novice and the expert parallel programmer.
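As a sketch of this implicit versus explicit choice (the placement types here follow those used in the example codes on this wiki):
// implicit: rely on the safe language defaults for allocation and placement
var a:Int;
// explicit: additional types pin a single copy of b onto process 2
var b:Int :: allocated[single[on[2]]];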
== Other uses ==
* GUI Programming - GUI programming can be quite tiresome and repetitive (hence the use of graphical design IDEs.) Using types would abstract the programmer away from many of these repetitive issues.
* Retrofit Existing Languages - The type approach could be applied to existing languages where a retrofit could be undertaken, keeping the programmer in their comfort zone but also giving them the power of type oriented programming.
* Numerous Type Systems - The type system is completely separate from the actual language, it would be possible to provide a number of type systems for a single language, such as a ''parallel'' system, a ''sequential'' system etc...
e0093013db289d3ab010fb2fb105838a02beb26b
Template:Introduction
10
10
48
47
2013-01-14T14:45:51Z
Polas
1
wikitext
text/x-wiki
*[[What_is_Mesham|What is Mesham?]]
*[[Parallel_Computing|Parallel Computing]]
**[[Communication]]
**[[Computation]]
*[[Type Oriented Programming Concept|Type Oriented Programming]]
6eb5ed2c9188e09ee9ccb34fe7d58c6418a9a941
49
48
2013-01-14T14:47:08Z
Polas
1
wikitext
text/x-wiki
*[[What_is_Mesham|What is Mesham?]]
*[[Parallel_Computing|Parallel Computing]]
**[[Communication]]
**[[Computation]]
*[[Type Oriented Programming Concept|Type Oriented Programming]]
*[[Category:Tutorials|Mesham Tutorials]]
*[[Category:Examples|Example Codes]]
4ad6e9d9db0a6cae8b73e5d7f50afffd12448a9b
50
49
2013-01-14T14:48:33Z
Polas
1
wikitext
text/x-wiki
*[[What_is_Mesham|What is Mesham?]]
*[[Parallel_Computing|Parallel Computing]]
**[[Communication]]
**[[Computation]]
*[[Type Oriented Programming Concept|Type Oriented Programming]]
*[[:Category:Tutorials|Mesham Tutorials]]
*[[:Category:Example Codes|Example Codes]]
2ddc26f38cee1d46cc06b7a785c0e5fbe9db8bc7
Mesham
0
5
24
23
2013-01-14T14:51:42Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 66%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= News|title= Latest developments}}
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 50%; vertical-align: top;" |
{{Box|subject= Documentation|title= Documentation}}
| style="padding: 0 0 0 10px; width: 50%; vertical-align: top;" |
{{Box|subject= Examples|title= In Code}}
|}
| style="padding: 0 0 0 10px; width: 33%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Introduction|title= Quick start}}
{{Box|subject= Downloads|title= Downloads}}
|}
74f459ba9099345856d3223cb2f5fee65f3d9184
25
24
2013-01-14T14:51:51Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 66%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= News|title= Latest developments}}
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 50%; vertical-align: top;" |
{{Box|subject= Documentation|title= Documentation}}
| style="padding: 0 0 0 10px; width: 50%; vertical-align: top;" |
{{Box|subject= Examples|title= In code}}
|}
| style="padding: 0 0 0 10px; width: 33%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Introduction|title= Quick start}}
{{Box|subject= Downloads|title= Downloads}}
|}
54cf603ea2f185ff2ceb70e4d17b6b74120b70fb
Template:Examples
10
12
70
69
2013-01-14T14:53:41Z
Polas
1
wikitext
text/x-wiki
*Selected tutorials
**[[Tutorial - Hello world|Hello world]]
**[[:Category:Tutorials|'''All tutorials''']]
*Selected codes
**[[Mandelbrot]]
**[[Image_processing|Image Processing]]
**[[Dartboard_PI|Dartboard method to find PI]]
**[[:Category:Example Codes|'''All codes''']]
4153262dc6d4a860401a7b9293fc80a6a2845c9f
Tutorial - Simple Types
0
219
1184
2013-01-14T16:00:35Z
Polas
1
Created page with '== Introduction == In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed …'
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
 #include <io>
 #include <string>
 function void main() {
    var a:=78;
    print(itostring(a)+"\n");
 };
In the above code snippet we have included the appropriate system headers (for printing and for integer to string conversion), specified our program entry point via the main function and declared the variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as with the variable ''p'' in the [[Tutorial - Hello world|Hello world]] tutorial, which was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement).
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type that specifies memory is allocated and lastly the [[Multiple|multiple]] type that specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
08f82315df9ae8cf42550e3046e0016990af423a
1185
1184
2013-01-14T16:06:29Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
 #include <io>
 #include <string>
 function void main() {
    var a:=78;
    print(itostring(a)+"\n");
 };
In the above code snippet we have included the appropriate system headers (for printing and for integer to string conversion), specified our program entry point via the main function and declared the variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as with the variable ''p'' in the [[Tutorial - Hello world|Hello world]] tutorial, which was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement).
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type that specifies memory is allocated and lastly the [[Multiple|multiple]] type that specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - the behaviour of types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
6766602955e51946d7544d2e494025ec1214767d
1186
1185
2013-01-14T16:25:21Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
 #include <io>
 #include <string>
 function void main() {
    var a:=78;
    print(itostring(a)+"\n");
 };
In the above code snippet we have included the appropriate system headers (for printing and for integer to string conversion), specified our program entry point via the main function and declared the variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as with the variable ''p'' in the [[Tutorial - Hello world|Hello world]] tutorial, which was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement).
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type that specifies memory is allocated and lastly the [[Multiple|multiple]] type that specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - the behaviour of types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
 #include <io>
 #include <string>
 function void main() {
    var a:Int::stack::onesided::allocated[multiple[]];
    a:=78;
    print(itostring(a)+"\n");
 };
The above code is, in terms of runtime behaviour, absolutely identical to the first code example: we have simply specified explicitly the type of variable ''a'' to be the type chain that is inferred in the first example. As you can see, being able to write code without all these explicit types saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing the empty ''[]'' braces is entirely optional.
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we involve two processes with shared memory communication:
 #include <io>
 #include <string>
 function void main() {
    var a:Int::allocated[single[on[0]]];
    proc 1 {
       a:=78;
    };
    sync a;
    proc 0 {
       print("Value: "+itostring(a)+"\n");
    };
 };
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, with the [[On|on]] type provided as a parameter and the value ''0'' provided to that type.
9204f832de640111ff3c88dd2f7325e251a71f3a
1187
1186
2013-01-14T16:41:44Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
 #include <io>
 #include <string>
 function void main() {
    var a:=78;
    print(itostring(a)+"\n");
 };
In the above code snippet we have included the appropriate system headers (for printing and for integer to string conversion), specified our program entry point via the main function and declared the variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as with the variable ''p'' in the [[Tutorial - Hello world|Hello world]] tutorial, which was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement).
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type that specifies memory is allocated and lastly the [[Multiple|multiple]] type that specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - the behaviour of types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
 #include <io>
 #include <string>
 function void main() {
    var a:Int::stack::onesided::allocated[multiple[]];
    a:=78;
    print(itostring(a)+"\n");
 };
The above code is, in terms of runtime behaviour, absolutely identical to the first code example: we have simply specified explicitly the type of variable ''a'' to be the type chain that is inferred in the first example. As you can see, being able to write code without all these explicit types saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing the empty ''[]'' braces is entirely optional.
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we involve two processes with shared memory communication:
 #include <io>
 #include <string>
 function void main() {
    var a:Int::allocated[single[on[0]]];
    proc 1 {
       a:=78;
    };
    sync a;
    proc 0 {
       print("Value: "+itostring(a)+"\n");
    };
 };
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, with the [[On|on]] type provided as a parameter and the value ''0'' provided to that type. This allocates variable ''a'' to the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only, and is not local to process 1, this will involve some form of shared memory communication to get that value across (as defined by the [[Onesided|onesided]] communication type which is used by default). Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, at which all processes which need to will write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writes there is no guarantee which value will be used if different values are written in the same step. Additionally, you can see that we have specified the variable name after [[Sync|sync]] here; this means synchronise on that variable alone - if you omit it then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see how process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to zero).
=== Further parallelism ===
We have very slightly modified the code below:
 #include <io>
 #include <string>
 var master:=1;
 var slave:=0;
 function void main() {
    var a:Int::stack::onesided::allocated[single[on[master]]];
    proc slave {
       a:=78;
    };
    sync a;
    proc master {
       print("Value: "+itostring(a)+"\n");
    };
 };
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who writes the value. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct will create all the preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the value of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables in global scope, outside of a function body, is allowed.
eee66fc15b14c18388d3dd7d7ab73f474ffc8803
1188
1187
2013-01-14T16:42:50Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
 #include <io>
 #include <string>
 function void main() {
    var a:=78;
    print(itostring(a)+"\n");
 };
In the above code snippet we have included the appropriate system headers (for printing and for integer to string conversion), specified our program entry point via the main function and declared the variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as with the variable ''p'' in the [[Tutorial - Hello world|Hello world]] tutorial, which was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement).
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type that specifies memory is allocated and lastly the [[Multiple|multiple]] type that specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - the behaviour of types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
 #include <io>
 #include <string>
 function void main() {
    var a:Int::stack::onesided::allocated[multiple[]];
    a:=78;
    print(itostring(a)+"\n");
 };
The above code is, in terms of runtime behaviour, absolutely identical to the first code example: we have simply specified explicitly the type of variable ''a'' to be the type chain that is inferred in the first example. As you can see, being able to write code without all these explicit types saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing the empty ''[]'' braces is entirely optional.
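As an illustrative sketch of these rules, appending the [[Heap|heap]] type to the rightmost end of the chain overrides the default [[Stack|stack]] allocation, and the optional empty braces can be omitted from ''multiple'':
 var b:Int::allocated[multiple]::heap;
 b:=78;
Because precedence is right to left, the rightmost [[Heap|heap]] type wins over the stack allocation that would otherwise apply by default.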
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we involve two processes with shared memory communication:
 #include <io>
 #include <string>
 function void main() {
    var a:Int::allocated[single[on[0]]];
    proc 1 {
       a:=78;
    };
    sync a;
    proc 0 {
       print("Value: "+itostring(a)+"\n");
    };
 };
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, with the [[On|on]] type provided as a parameter and the value ''0'' provided to that type. This allocates variable ''a'' to the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only, and is not local to process 1, this will involve some form of shared memory communication to get that value across (as defined by the [[Onesided|onesided]] communication type which is used by default). Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, at which all processes which need to will write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writes there is no guarantee which value will be used if different values are written in the same step. Additionally, you can see that we have specified the variable name after [[Sync|sync]] here; this means synchronise on that variable alone - if you omit it then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see how process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to zero).
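As a sketch only, omitting the variable name after [[Sync|sync]] synchronises on all outstanding variables rather than on ''a'' alone; the middle of the example above would become:
 proc 1 {
    a:=78;
 };
 sync;
 proc 0 {
    print("Value: "+itostring(a)+"\n");
 };
Here the bare [[Sync|sync]] still acts as a barrier, completing the outstanding one sided communication of ''a'' along with any other pending communications.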
=== Further parallelism ===
We have very slightly modified the code below:
 #include <io>
 #include <string>
 var master:=1;
 var slave:=0;
 function void main() {
    var a:Int::stack::onesided::allocated[single[on[master]]];
    proc slave {
       a:=78;
    };
    sync a;
    proc master {
       print("Value: "+itostring(a)+"\n");
    };
 };
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who writes the value. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct will create all the preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the value of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables in global scope, outside of a function body, is allowed.
[[:Category:Tutorials|Simple Types]]
b8e1c226bfceaee795ab5ee90e57103606ed0f43
1189
1188
2013-01-14T16:43:22Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
 #include <io>
 #include <string>
 function void main() {
    var a:=78;
    print(itostring(a)+"\n");
 };
In the above code snippet we have included the appropriate system headers (for printing and for integer to string conversion), specified our program entry point via the main function and declared the variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as with the variable ''p'' in the [[Tutorial - Hello world|Hello world]] tutorial, which was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement).
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type that specifies memory is allocated and lastly the [[Multiple|multiple]] type that specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left: the behaviour of the types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type, which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
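To make this precedence rule concrete, consider the following sketch. It appends the [[Heap|heap]] type to the rightmost end of the default chain, as described above; the exact spelling of such a chain is an assumption for illustration, not taken from this page:
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple]::heap;
a:=78;
print(itostring(a)+"\n");
};
Here ''heap'' sits to the right of ''stack'' in the chain, so its allocation behaviour wins the conflict and variable ''a'' would be allocated on the heap rather than in the stack frame.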
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example that we saw, except that we have explicitly specified the type of variable ''a'' to be the type chain that was inferred in the first example. As you can see, being able to write code without all these explicit types simply saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing the empty ''[]'' braces is entirely optional.
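As a minimal sketch of this optionality (assuming, per the paragraph above, that empty braces may be freely omitted), the following two declarations are equivalent:
var a:Int::allocated[multiple[]];
var b:Int::allocated[multiple];
Both declare an integer allocated to all processes; the ''[]'' on ''multiple'' carries no further information and may be dropped.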
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we are involving two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync a;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, which takes the [[On|on]] type as a parameter, which in turn takes the value ''0''. What this does is allocate variable ''a'' in the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types; they are still added by default, as we have not specified types to control memory allocation or the communication method, but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this will involve some form of shared memory communication to get that value across (as defined by the [[Onesided|onesided]] communication type, which is used by default). Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, at which all processes that need to will write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write the same locations any number of times, although with writes there is no guarantee which value will be used if they differ within the same step. Additionally, you can see how we have specified the variable name after [[Sync|sync]] here; this means synchronise on that variable alone. If you omit it, then synchronisation covers all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again. Notice that process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to zero.)
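For reference, the exercise amounts to running the same program with the sync statement removed (a sketch of the exercise, not new material):
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
Without the barrier, process 1's write is not guaranteed to have reached process 0's memory, so the print on process 0 reports the default initial value of ''0'' rather than ''78''.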
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::stack::onesided::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync a;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who does the writing. Try modifying these values, although be warned: changing them to large values will cause the creation of many processes that do nothing, as the [[Proc|proc]] construct will create all the preceding processes in order to honour the process ID. For instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the value of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables in global scope, outside of a function body, is allowed.
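To illustrate the process creation point, consider this sketch (the values here are arbitrary, chosen only for the example):
var master:=4;
var slave:=1;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync a;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
Because ''master'' is 4, processes 0 to 4 are all created so that the process with ID 4 exists, even though only processes 1 and 4 do any useful work here.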
[[Category:Tutorials|Simple Types]]
874459ddaeb7da32c208ce376bed7e0bced9f75c
1190
1189
2013-01-14T16:49:34Z
Polas
1
/* Further parallelism */
wikitext
text/x-wiki
96f0cfe952decb38da861e7c8acaca63696b2694
Tutorial - Simple Types
0
219
1191
1190
2013-01-14T17:08:08Z
Polas
1
wikitext
text/x-wiki
== Changing the type ==
As Mesham code runs, we can change the type of a variable by modifying the chain, as illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this. See an error at line 5? Don't worry, that was entirely expected. We type variable ''a'' to be an [[Int]] (with all the default types that go with it) and the assignment at line 3 goes ahead fine, but then at line 4 we modify the type of ''a'' via the set type operator '':'' to be the current type of ''a'' chained with the [[Const|const]] type, which forces the variable to be read only. Hence the assignment at line 5 fails, because the type chain of variable ''a'' now contains the [[Const|const]] type. By removing either this assignment or the type modification at line 4, the code will compile fine.
Modifying types in this form can be very powerful, but there are some points to bear in mind. Firstly, it is not possible to modify the [[Allocated|allocated]] type or its contents: we are changing the behaviour of a variable, not whether and where it is allocated in memory, and attempting to do so will result in an error. Secondly, a type modification is bound to the local scope; once we leave this scope the type reverts to what it was before.
[[Category:Tutorials|Simple Types]]
f979e772040b8694f2f468bb661eb57843985762
1192
1191
2013-01-14T17:10:43Z
Polas
1
/* Changing the type */
wikitext
text/x-wiki
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile because the programmer has specified that, just for the assignment at line 4, the [[Const|const]] type is appended to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
[[Category:Tutorials|Simple Types]]
21c81e0ab3093885878337015bf41986bb9546f5
1193
1192
2013-01-14T17:14:15Z
Polas
1
/* Type chains */
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
#include <io>
#include <string>
function void main() {
var a:=78;
print(itostring(a)+"\n");
};
In the above code snippet we have included the appropriate system headers (for printing and integer to string conversion), specified our program entry point via the main function and declared variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as in the [[Tutorial - Hello world|Hello world]] tutorial, where variable ''p'' was inferred to be an [[Int]] later on as it was used in a [[Par|par]] statement).
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, there are a number of other default types associated with an integer: the [[Stack|stack]] type to specify that it is allocated to the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type that specifies memory is allocated and lastly the [[Multiple|multiple]] type that specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they actually apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - so the behaviour of the types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type which would be to the left of it.
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example that we have seen - we have just explicitly specified the type of variable ''a'' to be the type chain that is inferred in the first example. As you can see, being able to write code without all these explicit types in many cases simply saves typing. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, the empty ''[]'' braces are entirely optional.
All type chains must have at least one [[:Category:Element Types|element type]] within them. Convention dictates that all [[:Category:Element Types|element types]] start with a capitalised first letter (such as [[Int]], [[Char]] and [[Bool]]) whereas all other types, known as [[:Category:Compound Types|compound types]], start with a lower case first letter (such as [[Stack|stack]], [[Multiple|multiple]] and [[Allocated|allocated]].)
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we are involving two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync a;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have modified the [[Multiple|multiple]] type to instead be the [[Single|single]] type, with the [[On|on]] type provided as a parameter, which in turn takes the value ''0''. What this is doing is allocating variable ''a'' to the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 is writing the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this will involve some form of shared memory communication to get that value across (as defined by the [[Onesided|onesided]] communication type which is used by default). Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display this to standard output. A very important aspect of this code is found on line 9: the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, and all processes which need to will then write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writing there is no guarantee which value will be used if they differ in the same step. Additionally, you can see how we have specified the variable name after the [[Sync|sync]] here; this means to synchronise on that variable alone - if you omit it then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see that process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to the zero value.)
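As noted above, omitting the variable name after [[Sync|sync]] synchronises all outstanding variables and their communications. The following is an illustrative sketch only (a variant of the example above, not code from elsewhere in this wiki) showing a bare ''sync'' covering two variables:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
b:=12;
};
sync;
proc 0 {
print("Total: "+itostring(a+b)+"\n");
};
};
Here the bare ''sync'' acts as a single barrier which completes the outstanding writes to both ''a'' and ''b''.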
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync a;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added in two variables, ''master'' and ''slave'', which control where the variable is allocated to and who does the value writing. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct will create the preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the value of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have declared these variables to have global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this was to illustrate that declaring variables is allowed in global scope outside of a function body.
== Changing the type ==
As the Mesham code runs we can change the type of a variable by modifying its chain, as illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this - see an error at line 5? Don't worry, that was entirely expected. We are typing variable ''a'' to be an [[Int]] (with all the default types that go with it) and performing an assignment at line 3, which goes ahead fine, but then at line 4 we modify the type of ''a'' via the set type operator '':'' to be the current type of ''a'' chained with the [[Const|const]] type, which forces the variable to be read only. Hence the assignment at line 5 fails because the type of variable ''a'' now has the [[Const|const]] type in its chain. By removing either this assignment or the type modification at line 4 the code will compile fine.
Modifying types in this form can be very powerful but there are some points to bear in mind. Firstly, it is not possible to modify the [[Allocated|allocated]] type or its contents: we are changing the behaviour of a variable, but not if and where it is allocated in memory, and doing so will result in an error. Secondly, modifying a type binds the modification to the local scope; once we leave that scope the type reverts back to what it was before.
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile because the programmer has specified that, just for the assignment at line 4, the [[Const|const]] type is appended to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
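To illustrate the scoping rule mentioned earlier (a type modification only lasts for the scope in which it is made), consider the following sketch. This is illustrative only; it assumes that a modification made inside a conditional's block reverts once that block is left:
function void main() {
var a:Int;
a:=23;
if (a==23) {
a:a::const;
};
a:=3;
};
Because the [[Const|const]] modification is bound to the inner scope, the final assignment is made against the original type of ''a'' and so should compile fine.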
[[Category:Tutorials|Simple Types]]
21c28902358e1f21254e23606d6422340d783017
Template:News
10
209
1134
1133
2013-01-14T17:15:56Z
Polas
1
wikitext
text/x-wiki
* Two new tutorials added - the [[Tutorial - Hello world|Hello world]] and [[Tutorial - Simple Types|Simple types]] tutorials which consider the very basics of Mesham
4ed2ad06944fe8d7d405e1af5cfb985fe4851cb6
1135
1134
2013-01-15T16:53:29Z
Polas
1
wikitext
text/x-wiki
* Two more tutorials added - [[Tutorial - Functions|Functions]] and [[Tutorial - Parallel Constructs|Parallel Constructs]]
* Two new tutorials added - the [[Tutorial - Hello world|Hello world]] and [[Tutorial - Simple Types|Simple types]] tutorials which consider the very basics of Mesham
f2631cf5f01b10560713e0c2faae649f4f9cf98f
Template:Examples
10
12
71
70
2013-01-14T17:16:26Z
Polas
1
wikitext
text/x-wiki
*Selected tutorials
**[[Tutorial - Hello world|Hello world]]
**[[Tutorial - Simple Types|Simple Types]]
**[[:Category:Tutorials|'''All tutorials''']]
*Selected codes
**[[Mandelbrot]]
**[[Image_processing|Image Processing]]
**[[Dartboard_PI|Dartboard method find PI]]
**[[:Category:Example Codes|'''All codes''']]
e669369fe1e21ed4358be2c4a483268fd10713b4
72
71
2013-01-15T16:54:03Z
Polas
1
wikitext
text/x-wiki
*Selected tutorials
**[[Tutorial - Hello world|Hello world]]
**[[Tutorial - Simple Types|Simple Types]]
**[[Tutorial - Functions|Functions]]
**[[Tutorial - Parallel Constructs|Parallel Constructs]]
**[[:Category:Tutorials|'''All tutorials''']]
*Selected codes
**[[Mandelbrot]]
**[[Image_processing|Image Processing]]
**[[Dartboard_PI|Dartboard method find PI]]
**[[:Category:Example Codes|'''All codes''']]
0e4c823d4bce49c6e26b77af94746906ea2da1c6
Tutorial - Functions
0
220
1200
2013-01-15T12:35:00Z
Polas
1
Created page with '== Introduction == In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very use…'
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very useful aspect of many languages and allows one to make their code more manageable.
== My first function ==
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
var c:=myAddFunction(a,20);
print(itostring(c)+"\n");
};
The above code declares two functions: ''myAddFunction'', which takes in two [[Int|Ints]] and returns an [[Int]] (the addition of these two numbers), and a ''main'' function which is the program entry point. In our ''main'' function you can see that we are calling out to ''myAddFunction'' using a mixture of the ''a'' variable and the constant value ''20''. The result of this function is then assigned to variable ''c'', which is displayed to standard output.
There are a number of points to note about this. First, notice that each function body is terminated via the sequential composition (;) token. This is because all blocks in Mesham must be terminated with some composition and functions are no exception, although it is currently meaningless to terminate a function with parallel composition. Secondly, move ''myAddFunction'' so that it appears below the ''main'' function and recompile - see that there is an error now? This is because we are using this function in the declaration of variable ''c'', whose type will be inferred from the function's return type. If you wish to do this then the function must appear before that point in the code, but if we just wanted to use the function in any other way then it can appear in any order. As an exercise, place ''myAddFunction'' after the ''main'' function, explicitly type ''c'' to be an integer and on the following line assign the value of ''c'' to be the result of a call to the function - see that it now works fine. As a further exercise, notice that we don't really need variable ''c'' at all - remove it and, in the [[Print|print]] function call, replace the reference to ''c'' with the call to our own function itself.
== Function arguments ==
By default all [[:Category:Element Types|element types]] and [[Record|records]] are pass by value, whereas [[Array|arrays]] and [[Referencerecord|reference records]] are pass by reference. This is dependent on the manner in which these data types are allocated, the former using the [[Stack|stack]] type and the latter using the [[Heap|heap]] type. We can determine whether a function's arguments and return value are pass by value or reference by specifying the [[Stack|stack]] (value), [[Static|static]] (value) or [[Heap|heap]] (reference) type in the chain.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int) {
mydata:=76;
};
If you compile and execute the above code, you will see the output ''10''. This is because, by default, an [[Int]] is pass by value, such that the value of ''a'' is passed into ''myChangeFunction'', which sets ''mydata'' to be equal to this. When we modify ''mydata'', because it occupies entirely different memory from ''a'', this has no effect upon ''a''.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int::heap) {
mydata:=76;
};
This code snippet is very similar to the previous one, but we have added the [[Heap|heap]] type to the chain of ''mydata'' - if you compile and execute this you will now see the output ''76''. This is because, by using the [[Heap|heap]] type, we have changed to pass by reference, which means that ''mydata'' and ''a'' share the same memory and hence a change to one will modify the other.
c68bee234152a050a1c158e40ff42c3b7ab594bd
1201
1200
2013-01-15T12:59:20Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very useful aspect of many languages and allows one to make their code more manageable.
== My first function ==
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
var c:=myAddFunction(a,20);
print(itostring(c)+"\n");
};
The above code declares two functions: ''myAddFunction'', which takes in two [[Int|Ints]] and returns an [[Int]] (the addition of these two numbers), and a ''main'' function which is the program entry point. In our ''main'' function you can see that we are calling out to ''myAddFunction'' using a mixture of the ''a'' variable and the constant value ''20''. The result of this function is then assigned to variable ''c'', which is displayed to standard output.
There are a number of points to note about this. First, notice that each function body is terminated via the sequential composition (;) token. This is because all blocks in Mesham must be terminated with some composition and functions are no exception, although it is currently meaningless to terminate a function with parallel composition. Secondly, move ''myAddFunction'' so that it appears below the ''main'' function and recompile - see that there is an error now? This is because we are using this function in the declaration of variable ''c'', whose type will be inferred from the function's return type. If you wish to do this then the function must appear before that point in the code, but if we just wanted to use the function in any other way then it can appear in any order. As an exercise, place ''myAddFunction'' after the ''main'' function, explicitly type ''c'' to be an integer and on the following line assign the value of ''c'' to be the result of a call to the function - see that it now works fine. As a further exercise, notice that we don't really need variable ''c'' at all - remove it and, in the [[Print|print]] function call, replace the reference to ''c'' with the call to our own function itself.
== Function arguments ==
By default all [[:Category:Element Types|element types]] and [[Record|records]] are pass by value, whereas [[Array|arrays]] and [[Referencerecord|reference records]] are pass by reference. This is dependent on the manner in which these data types are allocated, the former using the [[Stack|stack]] type and the latter using the [[Heap|heap]] type. We can determine whether a function's arguments and return value are pass by value or reference by specifying the [[Stack|stack]] (value), [[Static|static]] (value) or [[Heap|heap]] (reference) type in the chain.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int) {
mydata:=76;
};
If you compile and execute the above code, you will see the output ''10''. This is because, by default, an [[Int]] is pass by value, such that the value of ''a'' is passed into ''myChangeFunction'', which sets ''mydata'' to be equal to this. When we modify ''mydata'', because it occupies entirely different memory from ''a'', this has no effect upon ''a''.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int::heap) {
mydata:=76;
};
This code snippet is very similar to the previous one, but we have added the [[Heap|heap]] type to the chain of ''mydata'' - if you compile and execute this you will now see the output ''76''. This is because, by using the [[Heap|heap]] type, we have changed to pass by reference, which means that ''mydata'' and ''a'' share the same memory and hence a change to one will modify the other. As far as function arguments go, it is fine to have a variable's memory allocated by some means and pass it to a function which expects memory in a different form - such as above, where ''a'' is (by default) allocated to stack memory but ''mydata'' is on heap memory. In such cases Mesham handles the necessary transformations.
=== The return type ===
function Int::heap myNewFunction() {
var a:Int::heap;
a:=23;
return a;
};
The code snippet above will return an [[Int]] by its reference when the function is called: internally the function creates variable ''a'', allocates it to [[Heap|heap]] memory, sets the value and returns it. However, an important distinction between function arguments and function return types is that the memory allocation of what we are returning must match the type. For example, change the type chain in the declaration from ''Int::heap'' to ''Int::stack'' and recompile - see that there is an error? When we think about this logically it is the only way in which this can work - if we allocate to the [[Stack|stack]] then the memory is on the current function's stack frame, which is destroyed once that function returns; if we were to return a reference to an item on it then that item would no longer exist and bad things would happen! By ensuring that the memory allocations match, we have allocated ''a'' to the heap, which exists outside of the function calls and will be garbage collected when appropriate.
4fc77ca66a2d1d8ced3b1f963cf5f8b39bfaa3a9
1202
1201
2013-01-15T13:05:44Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very useful aspect of many languages and allows one to make their code more manageable.
== My first function ==
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
var c:=myAddFunction(a,20);
print(itostring(c)+"\n");
};
The above code declares two functions: ''myAddFunction'', which takes in two [[Int|Ints]] and returns an [[Int]] (the addition of these two numbers), and a ''main'' function which is the program entry point. In our ''main'' function you can see that we are calling out to ''myAddFunction'' using a mixture of the ''a'' variable and the constant value ''20''. The result of this function is then assigned to variable ''c'', which is displayed to standard output.
There are a number of points to note about this. First, notice that each function body is terminated via the sequential composition (;) token. This is because all blocks in Mesham must be terminated with some composition and functions are no exception, although it is currently meaningless to terminate a function with parallel composition. Secondly, move ''myAddFunction'' so that it appears below the ''main'' function and recompile - see that there is an error now? This is because we are using this function in the declaration of variable ''c'', whose type will be inferred from the function's return type. If you wish to do this then the function must appear before that point in the code, but if we just wanted to use the function in any other way then it can appear in any order. As an exercise, place ''myAddFunction'' after the ''main'' function, explicitly type ''c'' to be an integer and on the following line assign the value of ''c'' to be the result of a call to the function - see that it now works fine. As a further exercise, notice that we don't really need variable ''c'' at all - remove it and, in the [[Print|print]] function call, replace the reference to ''c'' with the call to our own function itself.
== Function arguments ==
By default all [[:Category:Element Types|element types]] and [[Record|records]] are pass by value, whereas [[Array|arrays]] and [[Referencerecord|reference records]] are pass by reference. This is dependent on the manner in which these data types are allocated, the former using the [[Stack|stack]] type and the latter using the [[Heap|heap]] type. We can determine whether a function's arguments and return value are pass by value or reference by specifying the [[Stack|stack]] (value), [[Static|static]] (value) or [[Heap|heap]] (reference) type in the chain.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int) {
mydata:=76;
};
If you compile and execute the above code, you will see the output ''10''. This is because, by default, an [[Int]] is pass by value, such that the value of ''a'' is passed into ''myChangeFunction'', which sets ''mydata'' to be equal to this. When we modify ''mydata'', because it occupies entirely different memory from ''a'', this has no effect upon ''a''.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int::heap) {
mydata:=76;
};
This code snippet is very similar to the previous one, but we have added the [[Heap|heap]] type to the chain of ''mydata'' - if you compile and execute this you will now see the output ''76''. This is because, by using the [[Heap|heap]] type, we have changed to pass by reference, which means that ''mydata'' and ''a'' share the same memory and hence a change to one will modify the other. As far as function arguments go, it is fine to have a variable's memory allocated by some means and pass it to a function which expects memory in a different form - such as above, where ''a'' is (by default) allocated to stack memory but ''mydata'' is on heap memory. In such cases Mesham handles the necessary transformations.
=== The return type ===
function Int::heap myNewFunction() {
var a:Int::heap;
a:=23;
return a;
};
The code snippet above will return an [[Int]] by its reference when the function is called: internally the function creates variable ''a'', allocates it to [[Heap|heap]] memory, sets the value and returns it. However, an important distinction between function arguments and function return types is that the memory allocation of what we are returning must match the type. For example, change the type chain in the declaration from ''Int::heap'' to ''Int::stack'' and recompile - see that there is an error? When we think about this logically it is the only way in which this can work - if we allocate to the [[Stack|stack]] then the memory is on the current function's stack frame, which is destroyed once that function returns; if we were to return a reference to an item on it then that item would no longer exist and bad things would happen! By ensuring that the memory allocations match, we have allocated ''a'' to the heap, which exists outside of the function calls and will be garbage collected when appropriate.
== Leaving a function ==
Regardless of whether we are returning data from a function or not, we can use the [[Return|return]] statement on its own to force leaving that function.
function void myTestFunction(var b:Int) {
if (b==2) return;
};
In the above code if variable ''b'' has a value of ''2'' then we will leave the function early. Note that we have not followed the conditional by an explicit block - this is allowed (as in many languages) for a single statement.
As an exercise, add some value after the return statement so that, for example, it reads something like ''return 23;'' - now attempt to recompile and see that you get an error, because in this case we are attempting to return a value when the function's definition reports that it does no such thing.
bd634a643419ac8f90628c0e6073910f27f63757
1203
1202
2013-01-15T13:14:49Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very useful aspect of many languages and allows one to make their code more manageable. We shall also take a look at how to provide optional command line arguments to some Mesham code.
== My first function ==
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
var c:=myAddFunction(a,20);
print(itostring(c)+"\n");
};
The above code declares two functions: ''myAddFunction'', which takes in two [[Int|Ints]] and returns an [[Int]] (the addition of these two numbers), and a ''main'' function which is the program entry point. In our ''main'' function you can see that we are calling out to ''myAddFunction'' using a mixture of the ''a'' variable and the constant value ''20''. The result of this function is then assigned to variable ''c'', which is displayed to standard output.
There are a number of points to note about this. First, notice that each function body is terminated via the sequential composition (;) token. This is because all blocks in Mesham must be terminated with some composition and functions are no exception, although it is currently meaningless to terminate a function with parallel composition. Secondly, move ''myAddFunction'' so that it appears below the ''main'' function and recompile - see that there is an error now? This is because we are using this function in the declaration of variable ''c'', whose type will be inferred from the function's return type. If you wish to do this then the function must appear before that point in the code, but if we just wanted to use the function in any other way then it can appear in any order. As an exercise, place ''myAddFunction'' after the ''main'' function, explicitly type ''c'' to be an integer and on the following line assign the value of ''c'' to be the result of a call to the function - see that it now works fine. As a further exercise, notice that we don't really need variable ''c'' at all - remove it and, in the [[Print|print]] function call, replace the reference to ''c'' with the call to our own function itself.
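A sketch of the first exercise described above - explicitly typing ''c'' so that ''myAddFunction'' may legally appear after the ''main'' function:
#include <io>
#include <string>
function void main() {
var c:Int;
c:=myAddFunction(10,20);
print(itostring(c)+"\n");
};
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
As no type inference from the function is needed when declaring ''c'', the ordering of the two functions no longer matters.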
== Function arguments ==
By default all [[:Category:Element Types|element types]] and [[Record|records]] are pass by value, whereas [[Array|arrays]] and [[Referencerecord|reference records]] are pass by reference. This is dependent on the manner in which these data types are allocated, the former using the [[Stack|stack]] type and the latter using the [[Heap|heap]] type. We can determine whether a function's arguments and return value are pass by value or reference by specifying the [[Stack|stack]] (value), [[Static|static]] (value) or [[Heap|heap]] (reference) type in the chain.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int) {
mydata:=76;
};
If you compile and execute the above code, you will see the output ''10''. This is because, by default, an [[Int]] is pass by value, such that the value of ''a'' is passed into ''myChangeFunction'', which sets ''mydata'' to be equal to this. When we modify ''mydata'', because it occupies entirely different memory from ''a'', this has no effect upon ''a''.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int::heap) {
mydata:=76;
};
This code snippet is very similar to the previous one, but we have added the [[Heap|heap]] type to the chain of ''mydata'' - if you compile and execute this you will now see the output ''76''. This is because, by using the [[Heap|heap]] type, we have changed to pass by reference, which means that ''mydata'' and ''a'' share the same memory and hence a change to one will modify the other. As far as function arguments go, it is fine to have a variable's memory allocated by some means and pass it to a function which expects memory in a different form - such as above, where ''a'' is (by default) allocated to stack memory but ''mydata'' is on heap memory. In such cases Mesham handles the necessary transformations.
=== The return type ===
function Int::heap myNewFunction() {
var a:Int::heap;
a:=23;
return a;
};
The code snippet above will return an [[Int]] by reference when the function is called; internally the function creates variable ''a'', allocates it in [[Heap|heap]] memory, sets its value and returns it. However, an important distinction between function arguments and function return types is that the memory allocation of what we are returning must match the type. For example, change the type chain in the declaration from ''Int::heap'' to ''Int::stack'' and recompile - see that there is an error? When we think about this logically it is the only way in which this can work - if we allocate on the [[Stack|stack]] then the memory is on the current function's stack frame, which is destroyed once that function returns; if we were to return a reference to an item on it then that item would no longer exist and bad things would happen! By ensuring that the memory allocations match, we have allocated ''a'' on the heap, which exists outside of the function calls and will be garbage collected when appropriate.
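To make the failing variant concrete, here is a sketch of the stack-allocated version described above (a hypothetical example which the compiler should reject, shown only to illustrate the rule; ''myBadFunction'' is not from the original text):
function Int::stack myBadFunction() {
var a:Int::stack;
a:=23;
return a;
};
Here ''a'' lives on the stack frame of ''myBadFunction'', which is destroyed on return, so returning a reference to it is rejected.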
== Leaving a function ==
Regardless of whether we are returning data from a function or not, we can use the [[Return|return]] statement on its own to force leaving that function.
function void myTestFunction(var b:Int) {
if (b==2) return;
};
In the above code if variable ''b'' has a value of ''2'' then we will leave the function early. Note that we have not followed the conditional by an explicit block - this is allowed (as in many languages) for a single statement.
As an exercise add some value after the return statement so that, for example, it reads something like ''return 23;'' - now attempt to recompile and see that you get an error, because in this case we are attempting to return a value when the function's definition declares that it returns nothing.
== Command line arguments ==
The main function also supports the reading of command line arguments. By definition you can provide the main function with either no function arguments (as we have seen up until this point) or alternatively two arguments, the first an [[Int]] and the second an [[Array|array]] of [[String|Strings]].
#include <io>
#include <string>
function void main(var argc:Int, var argv:array[String]) {
var i;
for i from 0 to argc - 1 {
print(itostring(i)+": "+argv[i]+"\n");
};
};
Compile and run the above code. With no arguments you will just see the name of the program; if you now supply command line arguments (separated by spaces) then these will also be displayed. There are a couple of general points to note about the code above. Firstly, the variable names ''argc'' and ''argv'' for the command line arguments are the generally accepted names to use - although you can call these variables whatever you want if you are so inclined.
Secondly, notice how we only tell the [[Array|array]] type that it is a collection of [[String|Strings]] and give no information about its dimensions; this is allowed in a function argument's type as we don't always know the size, but it limits us to one dimension and stops any error checking on the index bounds used to access elements. Lastly, see how we are looping from 0 to ''argc - 1''; the [[For|for]] loop is inclusive of both bounds, so if ''argc'' were zero then one iteration would still occur, which is not what we want here.
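The inclusive bounds can be guarded explicitly where a count may be zero. A hypothetical sketch, reusing the argument names above (in practice ''argc'' is always at least one because the program name is supplied, so the guard here is purely illustrative):
#include <io>
function void main(var argc:Int, var argv:array[String]) {
var i;
if (argc > 0) {
for i from 0 to argc - 1 {
print(argv[i]+"\n");
};
};
};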
[[Category:Tutorials|Functions]]
119f5580dfc923574bdd1101736959da27e9311c
Tutorial - Parallel Constructs
0
221
1207
2013-01-15T14:51:56Z
Polas
1
Created page with '== Introduction == In this tutorial we shall look at more advanced parallel constructs as to what were discussed in the [[Tutorial - Hello world|Hello world]] tutorial. There wi…'
wikitext
text/x-wiki
== Introduction ==
In this tutorial we shall look at more advanced parallel constructs than those discussed in the [[Tutorial - Hello world|Hello world]] tutorial. There will also be some reference made to the concepts covered in the [[Tutorial - Functions|functions]] and [[Tutorial - Simple Types|simple types]] tutorials.
== Parallel composition ==
In the [[Tutorial - Hello world|Hello world]] tutorial we briefly saw an example of using parallel composition (||) to control parallelism. Let's now further explore this with some code examples:
#include <io>
#include <string>
#include <parallel>
function void main() {
{
var i:=pid();
print("Hello from PID "+itostring(i)+"\n");
} || {
var i:=30;
var f:=20;
print("Addition result is "+itostring(i+f)+"\n");
};
};
This specifies two blocks of code, both running in parallel (two processes): the first will display a message with the process ID in it, while the other process will declare two [[Int]] variables and display the result of adding them together. This approach, of specifying code in blocks and then using parallel composition to run the blocks in parallel on different processes, is a useful one. As a further exercise try rearranging the blocks and view the value of the process ID reported; also add further parallel blocks (via more parallel composition) to do things and look at the results.
=== Unstructured parallel composition ===
In the previous example we structured parallel composition by using blocks. It is also possible to run individual statements in parallel using this composition, although it is important to understand the associativity and precedence of parallel composition and sequential composition when doing so.
#include <io>
#include <string>
#include <parallel>
function void main() {
var i:=0;
var j:=0;
var z:=0;
var m:=0;
var n:=0;
var t:=0;
{i:=1;j:=1||z:=1;m:=1||n:=1||t:=1;};
print(itostring(pid())+":: i: "+itostring(i)+", j: "+itostring(j)+", z: "+itostring(z)
+", m: "+itostring(m)+", n: "+itostring(n)+", t: "+itostring(t)+"\n");
};
b2fb00e3037605864860efe99088a49716c0e278
1208
1207
2013-01-15T15:23:10Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we shall look at more advanced parallel constructs than those discussed in the [[Tutorial - Hello world|Hello world]] tutorial. There will also be some reference made to the concepts covered in the [[Tutorial - Functions|functions]] and [[Tutorial - Simple Types|simple types]] tutorials.
== Parallel composition ==
In the [[Tutorial - Hello world|Hello world]] tutorial we briefly saw an example of using parallel composition (||) to control parallelism. Let's now further explore this with some code examples:
#include <io>
#include <string>
#include <parallel>
function void main() {
{
var i:=pid();
print("Hello from PID "+itostring(i)+"\n");
} || {
var i:=30;
var f:=20;
print("Addition result is "+itostring(i+f)+"\n");
};
};
This specifies two blocks of code, both running in parallel (two processes): the first will display a message with the process ID in it, while the other process will declare two [[Int]] variables and display the result of adding them together. This approach, of specifying code in blocks and then using parallel composition to run the blocks in parallel on different processes, is a useful one. As a further exercise try rearranging the blocks and view the value of the process ID reported; also add further parallel blocks (via more parallel composition) to do things and look at the results.
=== Unstructured parallel composition ===
In the previous example we structured parallel composition by using blocks. It is also possible to run individual statements in parallel using this composition, although it is important to understand the associativity and precedence of parallel composition and sequential composition when doing so.
#include <io>
#include <string>
#include <parallel>
function void main() {
var i:=0;
var j:=0;
var z:=0;
var m:=0;
var n:=0;
var t:=0;
{i:=1;j:=1||z:=1;m:=1||n:=1||t:=1;};
print(itostring(pid())+":: i: "+itostring(i)+", j: "+itostring(j)+", z: "+itostring(z)
+", m: "+itostring(m)+", n: "+itostring(n)+", t: "+itostring(t)+"\n");
};
This is a nice little program to help figure out what, for each process, is being run. You can further play with this code and tweak it as required. Broadly, we declare all the variables to be [[Int|Ints]] with zero value and then execute the code in the { } code block, followed by the [[Print|print]] statement, on all processes. Where it gets interesting is the behaviour inside the code block itself. The assignment ''i:=1'' is executed on all processes, sequentially composed with the rest of the code block; ''j:=1'' is executed just on process 0, whereas at the same time the value 1 is written to variables ''z'' and ''m'' on process 1. Process 2 performs the assignment ''n:=1'' and lastly process 3 assigns 1 to variable ''t''. From this example you can understand how parallel composition behaves when unstructured like this - as an exercise, add additional code blocks (via braces) and see how that changes the behaviour by specifying explicitly what code belongs where.
The first parallel composition will bind to the statement (or code block) immediately before it and then those after it - hence ''i:=1'' is performed on all processes but the sequentially composed statements after the parallel composition are performed on just one process. Incidentally, if we removed the { } braces around the unstructured parallel block, then the [[Print|print]] statement would be performed only on process 3 - if it is not clear why, experiment and reread this section to fully understand.
== Allocation inference ==
If we declare a variable to have a specific allocation strategy within a parallel construct then this must be compatible with the scope of that construct. For example:
function void main() {
group 1,3 {
var i:Int::allocated[multiple[]];
};
};
If you compile the code above, it will work but you get the warning ''Commgroup type and process list inferred from multiple and parallel scope''. So what does this mean? Well, we are selecting a [[Group|group]] of processes (in this case processes 1 and 3) and declaring variable ''i'' to be an [[Int]] allocated to all processes; however, the processes not in scope (0 and 2) will never know of the existence of ''i'' and hence can never be involved with it in any way. Even worse, if we were to synchronise on ''i'' then it might cause deadlock on these other processes that have no knowledge of it. Therefore, allocating ''i'' to all processes is the wrong thing to do here. Instead, what we really want is to allocate ''i'' to the group of processes that are in parallel scope using the [[Commgroup|commgroup]] type; if this is omitted the compiler is clever enough to deduce it and put that behaviour in, but it warns the programmer that it has done so.
If you modify the type chain of ''i'' from ''Int::allocated[multiple[]]'' to ''Int::allocated[multiple[commgroup[]]]'' and recompile, you will see a different warning saying that it has just inferred the process list from parallel scope (and not the type, as that is already there). Now change the type chain to read ''Int::allocated[multiple[commgroup[1,3]]]'' and recompile - see that there is no warning, as we have explicitly specified the processes to allocate the variable to? It is up to you as a programmer, and your style, whether you want to do this explicitly or put up with the compiler warnings.
So, what happens if we try to allocate variable ''i'' to some process that is not in parallel scope? Modify the type chain of ''i'' to read ''Int::allocated[multiple[commgroup[1,2]]]'' and recompile - you should see an error now that looks like ''Process 2 in the commgroup is not in parallel scope''. We have the same protection for the single type too:
function void main() {
group 1,3 {
var i:Int::allocated[single[on[0]]];
};
};
If you try to compile this code, then you will get the error ''Process 0 in the single allocation is not in parallel scope'', because you have attempted to allocate variable ''i'' to process 0 but this process is not in scope, so the allocation can never be performed.
fc98a8fb2d39920ee250cf0d6637c3931c2e0ecf
1209
1208
2013-01-15T15:27:36Z
Polas
1
/* Allocation inference */
wikitext
text/x-wiki
== Introduction ==
In this tutorial we shall look at more advanced parallel constructs than those discussed in the [[Tutorial - Hello world|Hello world]] tutorial. There will also be some reference made to the concepts covered in the [[Tutorial - Functions|functions]] and [[Tutorial - Simple Types|simple types]] tutorials.
== Parallel composition ==
In the [[Tutorial - Hello world|Hello world]] tutorial we briefly saw an example of using parallel composition (||) to control parallelism. Let's now further explore this with some code examples:
#include <io>
#include <string>
#include <parallel>
function void main() {
{
var i:=pid();
print("Hello from PID "+itostring(i)+"\n");
} || {
var i:=30;
var f:=20;
print("Addition result is "+itostring(i+f)+"\n");
};
};
This specifies two blocks of code, both running in parallel (two processes): the first will display a message with the process ID in it, while the other process will declare two [[Int]] variables and display the result of adding them together. This approach, of specifying code in blocks and then using parallel composition to run the blocks in parallel on different processes, is a useful one. As a further exercise try rearranging the blocks and view the value of the process ID reported; also add further parallel blocks (via more parallel composition) to do things and look at the results.
=== Unstructured parallel composition ===
In the previous example we structured parallel composition by using blocks. It is also possible to run individual statements in parallel using this composition, although it is important to understand the associativity and precedence of parallel composition and sequential composition when doing so.
#include <io>
#include <string>
#include <parallel>
function void main() {
var i:=0;
var j:=0;
var z:=0;
var m:=0;
var n:=0;
var t:=0;
{i:=1;j:=1||z:=1;m:=1||n:=1||t:=1;};
print(itostring(pid())+":: i: "+itostring(i)+", j: "+itostring(j)+", z: "+itostring(z)
+", m: "+itostring(m)+", n: "+itostring(n)+", t: "+itostring(t)+"\n");
};
This is a nice little program to help figure out what, for each process, is being run. You can further play with this code and tweak it as required. Broadly, we declare all the variables to be [[Int|Ints]] with zero value and then execute the code in the { } code block, followed by the [[Print|print]] statement, on all processes. Where it gets interesting is the behaviour inside the code block itself. The assignment ''i:=1'' is executed on all processes, sequentially composed with the rest of the code block; ''j:=1'' is executed just on process 0, whereas at the same time the value 1 is written to variables ''z'' and ''m'' on process 1. Process 2 performs the assignment ''n:=1'' and lastly process 3 assigns 1 to variable ''t''. From this example you can understand how parallel composition behaves when unstructured like this - as an exercise, add additional code blocks (via braces) and see how that changes the behaviour by specifying explicitly what code belongs where.
The first parallel composition will bind to the statement (or code block) immediately before it and then those after it - hence ''i:=1'' is performed on all processes but the sequentially composed statements after the parallel composition are performed on just one process. Incidentally, if we removed the { } braces around the unstructured parallel block, then the [[Print|print]] statement would be performed only on process 3 - if it is not clear why, experiment and reread this section to fully understand.
== Allocation inference ==
If we declare a variable to have a specific allocation strategy within a parallel construct then this must be compatible with the scope of that construct. For example:
function void main() {
group 1,3 {
var i:Int::allocated[multiple[]];
};
};
If you compile the code above, it will work but you get the warning ''Commgroup type and process list inferred from multiple and parallel scope''. So what does this mean? Well, we are selecting a [[Group|group]] of processes (in this case processes 1 and 3) and declaring variable ''i'' to be an [[Int]] allocated to all processes; however, the processes not in scope (0 and 2) will never know of the existence of ''i'' and hence can never be involved with it in any way. Even worse, if we were to synchronise on ''i'' then it might cause deadlock on these other processes that have no knowledge of it. Therefore, allocating ''i'' to all processes is the wrong thing to do here. Instead, what we really want is to allocate ''i'' to the group of processes that are in parallel scope using the [[Commgroup|commgroup]] type; if this is omitted the compiler is clever enough to deduce it and put that behaviour in, but it warns the programmer that it has done so.
If you modify the type chain of ''i'' from ''Int::allocated[multiple[]]'' to ''Int::allocated[multiple[commgroup[]]]'' and recompile, you will see a different warning saying that it has just inferred the process list from parallel scope (and not the type, as that is already there). Now change the type chain to read ''Int::allocated[multiple[commgroup[1,3]]]'' and recompile - see that there is no warning, as we have explicitly specified the processes to allocate the variable to? It is up to you as a programmer, and your style, whether you want to do this explicitly or put up with the compiler warnings.
So, what happens if we try to allocate variable ''i'' to some process that is not in parallel scope? Modify the type chain of ''i'' to read ''Int::allocated[multiple[commgroup[1,2]]]'' and recompile - you should see an error now that looks like ''Process 2 in the commgroup is not in parallel scope''. We have the same protection for the single type too:
function void main() {
group 1,3 {
var i:Int::allocated[single[on[0]]];
};
};
If you try to compile this code, then you will get the error ''Process 0 in the single allocation is not in parallel scope'', because you have attempted to allocate variable ''i'' to process 0 but this process is not in scope, so the allocation can never be performed. Whilst we have been experimenting with the [[Group|group]] parallel construct, the same behaviour applies to all parallel structural constructs.
07458e29b0befff6ae7178558fd5cbc030d305a1
1210
1209
2013-01-15T15:40:05Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we shall look at more advanced parallel constructs than those discussed in the [[Tutorial - Hello world|Hello world]] tutorial. There will also be some reference made to the concepts covered in the [[Tutorial - Functions|functions]] and [[Tutorial - Simple Types|simple types]] tutorials.
== Parallel composition ==
In the [[Tutorial - Hello world|Hello world]] tutorial we briefly saw an example of using parallel composition (||) to control parallelism. Let's now further explore this with some code examples:
#include <io>
#include <string>
#include <parallel>
function void main() {
{
var i:=pid();
print("Hello from PID "+itostring(i)+"\n");
} || {
var i:=30;
var f:=20;
print("Addition result is "+itostring(i+f)+"\n");
};
};
This specifies two blocks of code, both running in parallel (two processes): the first will display a message with the process ID in it, while the other process will declare two [[Int]] variables and display the result of adding them together. This approach, of specifying code in blocks and then using parallel composition to run the blocks in parallel on different processes, is a useful one. As a further exercise try rearranging the blocks and view the value of the process ID reported; also add further parallel blocks (via more parallel composition) to do things and look at the results.
=== Unstructured parallel composition ===
In the previous example we structured parallel composition by using blocks. It is also possible to run individual statements in parallel using this composition, although it is important to understand the associativity and precedence of parallel composition and sequential composition when doing so.
#include <io>
#include <string>
#include <parallel>
function void main() {
var i:=0;
var j:=0;
var z:=0;
var m:=0;
var n:=0;
var t:=0;
{i:=1;j:=1||z:=1;m:=1||n:=1||t:=1;};
print(itostring(pid())+":: i: "+itostring(i)+", j: "+itostring(j)+", z: "+itostring(z)
+", m: "+itostring(m)+", n: "+itostring(n)+", t: "+itostring(t)+"\n");
};
This is a nice little program to help figure out what, for each process, is being run. You can further play with this code and tweak it as required. Broadly, we declare all the variables to be [[Int|Ints]] with zero value and then execute the code in the { } code block, followed by the [[Print|print]] statement, on all processes. Where it gets interesting is the behaviour inside the code block itself. The assignment ''i:=1'' is executed on all processes, sequentially composed with the rest of the code block; ''j:=1'' is executed just on process 0, whereas at the same time the value 1 is written to variables ''z'' and ''m'' on process 1. Process 2 performs the assignment ''n:=1'' and lastly process 3 assigns 1 to variable ''t''. From this example you can understand how parallel composition behaves when unstructured like this - as an exercise, add additional code blocks (via braces) and see how that changes the behaviour by specifying explicitly what code belongs where.
The first parallel composition will bind to the statement (or code block) immediately before it and then those after it - hence ''i:=1'' is performed on all processes but the sequentially composed statements after the parallel composition are performed on just one process. Incidentally, if we removed the { } braces around the unstructured parallel block, then the [[Print|print]] statement would be performed only on process 3 - if it is not clear why, experiment and reread this section to fully understand.
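If the implicit grouping is hard to read, one way to make it explicit is to brace each process's statements. This is a sketch, assuming the binding behaviour described above, and should behave identically to the unstructured version:
{i:=1;{j:=1;} || {z:=1;m:=1;} || {n:=1;} || {t:=1;};};
That is: ''i:=1'' on all processes, then one braced block per process, composed in parallel.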
== Allocation inference ==
If we declare a variable to have a specific allocation strategy within a parallel construct then this must be compatible with the scope of that construct. For example:
function void main() {
group 1,3 {
var i:Int::allocated[multiple[]];
};
};
If you compile the code above, it will work but you get the warning ''Commgroup type and process list inferred from multiple and parallel scope''. So what does this mean? Well, we are selecting a [[Group|group]] of processes (in this case processes 1 and 3) and declaring variable ''i'' to be an [[Int]] allocated to all processes; however, the processes not in scope (0 and 2) will never know of the existence of ''i'' and hence can never be involved with it in any way. Even worse, if we were to synchronise on ''i'' then it might cause deadlock on these other processes that have no knowledge of it. Therefore, allocating ''i'' to all processes is the wrong thing to do here. Instead, what we really want is to allocate ''i'' to the group of processes that are in parallel scope using the [[Commgroup|commgroup]] type; if this is omitted the compiler is clever enough to deduce it and put that behaviour in, but it warns the programmer that it has done so.
If you modify the type chain of ''i'' from ''Int::allocated[multiple[]]'' to ''Int::allocated[multiple[commgroup[]]]'' and recompile, you will see a different warning saying that it has just inferred the process list from parallel scope (and not the type, as that is already there). Now change the type chain to read ''Int::allocated[multiple[commgroup[1,3]]]'' and recompile - see that there is no warning, as we have explicitly specified the processes to allocate the variable to? It is up to you as a programmer, and your style, whether you want to do this explicitly or put up with the compiler warnings.
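Putting those modifications together, the fully explicit, warning-free declaration described above looks like this:
function void main() {
group 1,3 {
var i:Int::allocated[multiple[commgroup[1,3]]];
};
};
Here the commgroup process list exactly matches the processes selected by the surrounding group construct, so the compiler has nothing to infer.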
So, what happens if we try to allocate variable ''i'' to some process that is not in parallel scope? Modify the type chain of ''i'' to read ''Int::allocated[multiple[commgroup[1,2]]]'' and recompile - you should see an error now that looks like ''Process 2 in the commgroup is not in parallel scope''. We have the same protection for the single type too:
function void main() {
group 1,3 {
var i:Int::allocated[single[on[0]]];
};
};
If you try to compile this code, then you will get the error ''Process 0 in the single allocation is not in parallel scope'', because you have attempted to allocate variable ''i'' to process 0 but this process is not in scope, so the allocation can never be performed. Whilst we have been experimenting with the [[Group|group]] parallel construct, the same behaviour applies to all parallel structural constructs.
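A corrected sketch (hypothetical, choosing process 1, which is one of the processes in the group's scope) allocates the single copy to an in-scope process instead:
function void main() {
group 1,3 {
var i:Int::allocated[single[on[1]]];
};
};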
== Nesting parallelism ==
Nesting parallelism is currently disallowed; whilst it could provide more flexibility for the programmer, it would make for a more complex language from the designer's and compiler writer's point of view.
function void main() {
var p;
par p from 0 to 3 {
proc 0 {
skip;
};
};
};
If you compile the code above it will result in the error ''Can not currently nest par, proc or group parallel blocks''.
== Parallelism in other functions ==
Up until this point we have placed our parallel constructs within the ''main'' function, but there is no specific reason for this.
#include <io>
function void main() {
a();
};
function void a() {
group 1,3 {
print("Hello from 1 or 3\n");
};
};
If you compile and run the code above you will see that processes 1 and 3 display the message on standard output. As an exercise, modify this code to include further functions which contain their own parallel constructs and call them from ''main'' or your own functions.
An important point to bear in mind is that ''a'' is now a parallel function, and there are some points to consider with this. Firstly, all parallel constructs ([[Par|par]], [[Proc|proc]] and [[Group|group]]) are blocking calls - hence all processes must see these, so to avoid deadlock all processes must call the function ''a''. Secondly, as discussed in the previous section, remember how we disallow nested parallelism? Well, we relax this restriction here '''but''' it is still not safe:
#include <io>
function void main() {
var p;
par p from 0 to 3 {
a();
};
};
function void a() {
group 1,3 {
print("Hello from 1 or 3\n");
};
};
If you compile the code above it will work, but you will get the warning ''It might not be wise calling a parallel function from within a parallel block''. Running the executable will produce the correct output, but changing the ''3'' to a ''2'' in the [[Par|par]] loop will result in deadlock. Therefore it is best to avoid this technique in practice.
[[Category:Tutorials|Parallel Constructs]]
53d49ae511477475683cda9f2244401a96993d10
Tutorial - Shared Memory
0
222
1213
2013-01-17T11:37:56Z
Polas
1
Created page with '== Introduction == In this tutorial we will be looking at using the, default, shared memory model for simple communication involving a single variable. It is important to unders…'
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality, and follows a small number of simple practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, being in a number of intermediate states. We move from one intermediate state to the next when [[Sync|synchronisation]] is used, and this can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated, which in itself is driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the code above you will see the output ''1'' - so let's have a look at exactly what is going on here. Variable ''a'' is allocated to all processes, all processes set the value to ''1'', process one then changes the value to ''99'', we do a barrier synchronisation on ''a'' and then process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access are always local - i.e. in this case, process one modifying the value has no impact on the copy of ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment; let's do something a bit more interesting - change the ''multiple[]'' to ''single[on[0]]'', then recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated just to a specific process and another one reads/writes to it, then this involves remote access to that memory (communication). Let's experiment further with this: remove ''a'' from the [[Sync|sync]] statement (line 10), recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, whereas [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Now comment out the [[Sync|sync]] statement entirely, recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
b:=a;
};
sync b;
proc 1 {
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We declare two variables; the first, ''a'', is held on process zero only, whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) modifies ''a'' locally (as it is held there) and then assigns the value of ''a'' to ''b''. We then [[Sync|synchronise]] on variable ''b'' and process one displays its value of ''b''. Stepping back a moment, what we are basically doing here is assigning to a variable allocated on all processes from one allocated on a single process. The result is that process zero writes the value of variable ''a'' into ''b'' on all processes (it is a broadcast).
846438f07c5ac961fad0ffaf30e089e28328e03f
1214
1213
2013-01-17T11:39:38Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality, and follows a small number of simple practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, being in a number of intermediate states. We move from one intermediate state to the next when [[Sync|synchronisation]] is used, and this can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated to which in itself is driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the following code then you will see the output ''1'' - so lets have a look what exactly is going on here. Variable ''a'' is allocated to all processes, all processes set the value to be ''1'', process one will then change the value to be ''99'', we do a barrier synchronisation on ''a'' and then process zero will display its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value will have no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment, let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated just to a specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further with this, remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun, the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, the [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Ok then, now comment out the [[Sync|sync]] keyword entirely and recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into some remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
b:=a;
};
sync b;
proc 1 {
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero then alone (via the [[Proc|proc]] statement will modify ''a'' locally (as it is held there) and then assign ''b'' to be the value of ''a''. We then [[Sync|synchronise]] based upon variable ''b'' and process one will display its value of ''b''. Stepping back a moment, what we are basically doing here is assigning a value to a variable allocated on all processes from one allocated on a single process. The result is that process zero will write the value of variable ''a'' into ''b'' on all processes (it is a broadcast.) If you remove the [[Sync|sync]] statement on line 11 then you will see that instead of displaying the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to update this remote value on process one from process zero.
1c9d1b3aec12dc4fc96484004606e3b21c7f6350
1215
1214
2013-01-17T11:55:42Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at using the, default, shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This actually sounds much more formidable than it is in reality and follows a simple number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and throughout the program's life be in a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used and this can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated to which in itself is driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the following code then you will see the output ''1'' - so lets have a look what exactly is going on here. Variable ''a'' is allocated to all processes, all processes set the value to be ''1'', process one will then change the value to be ''99'', we do a barrier synchronisation on ''a'' and then process zero will display its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value will have no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment, let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated just to a specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further with this, remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun, the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, the [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Ok then, now comment out the [[Sync|sync]] keyword entirely and recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into some remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
b:=a;
};
sync b;
proc 1 {
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero then alone (via the [[Proc|proc]] statement will modify ''a'' locally (as it is held there) and then assign ''b'' to be the value of ''a''. We then [[Sync|synchronise]] based upon variable ''b'' and process one will display its value of ''b''. Stepping back a moment, what we are basically doing here is assigning a value to a variable allocated on all processes from one allocated on a single process. The result is that process zero will write the value of variable ''a'' into ''b'' on all processes (it is a broadcast.) If you remove the [[Sync|sync]] statement on line 11 then you will see that instead of displaying the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to update this remote value on process one from process zero.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
proc 1 {
a:=2;
};
sync a;
group 0, 2 {
print(itostring(a)+"\n");
};
};
The same thing will happen with [[Commgroup|communication groups]] too compile and run the following code, you will see that process one has written the value ''2'' into the memory of variable ''a'' which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync a;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero, we are then performing an assignment ''a:=b'' at line 12 which, because the variables are on the same process is local and occurs immediately. Now, change ''processOneAllocation'' to be equal to ''1'' and uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. See the same value - but now process 0 is writing the value of ''b'' into the remote memory of ''a'' and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same here then it is local and if not then remote.
== Limits of communication ==
63ad7fb48a1274451730afb29b74b3ac96e91caa
1216
1215
2013-01-17T12:00:23Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at using the, default, shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This actually sounds much more formidable than it is in reality and follows a simple number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and throughout the program's life be in a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used and this can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated to which in itself is driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the following code then you will see the output ''1'' - so lets have a look what exactly is going on here. Variable ''a'' is allocated to all processes, all processes set the value to be ''1'', process one will then change the value to be ''99'', we do a barrier synchronisation on ''a'' and then process zero will display its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value will have no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment, let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated just to a specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further with this, remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun, the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, the [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Ok then, now comment out the [[Sync|sync]] keyword entirely and recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into some remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
b:=a;
};
sync b;
proc 1 {
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero then alone (via the [[Proc|proc]] statement will modify ''a'' locally (as it is held there) and then assign ''b'' to be the value of ''a''. We then [[Sync|synchronise]] based upon variable ''b'' and process one will display its value of ''b''. Stepping back a moment, what we are basically doing here is assigning a value to a variable allocated on all processes from one allocated on a single process. The result is that process zero will write the value of variable ''a'' into ''b'' on all processes (it is a broadcast.) If you remove the [[Sync|sync]] statement on line 11 then you will see that instead of displaying the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to update this remote value on process one from process zero.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
proc 1 {
a:=2;
};
sync a;
group 0, 2 {
print(itostring(a)+"\n");
};
};
The same thing will happen with [[Commgroup|communication groups]] too compile and run the following code, you will see that process one has written the value ''2'' into the memory of variable ''a'' which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync a;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero, we are then performing an assignment ''a:=b'' at line 12 which, because the variables are on the same process is local and occurs immediately. Now, change ''processOneAllocation'' to be equal to ''1'' and uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. See the same value - but now process 0 is writing the value of ''b'' into the remote memory of ''a'' and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same here then it is local and if not then remote.
== Limits of communication ==
Currently all communication is based upon assignment, to illustrate this look at the following code
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int;
proc 0 {
b:=a;
};
};
If we compile this then we will get the error message ''Assignment must be visible to process 1'' which is because, as communication is assignment driven, process one (which contains ''a'') must drive this assignment and communication. To fix this you could change from process zero to process one doing the assignment at line 8 which would enable this code to compile correctly. It is planned in the future to extend the compiler to support this pull (as well as push) remote memory mechanism.
b5230390f584de3023e2a4ede6937a269d2be095
1217
1216
2013-01-17T12:00:53Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will be looking at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds far more formidable than it is in reality, and boils down to a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and passing through a number of intermediate states during the program's life. We go from one intermediate state to the next when [[Sync|synchronisation]] is used, and this can be thought of as barrier synchronisation.
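These rules can be sketched with a variable that passes from one state to the next (a sketch only; the ''proc'', ''sync'' and allocation types used here are introduced in the sections which follow):

function void main() {
   var a:Int::allocated[single[on[0]]];
   proc 1 {
      a:=10; // a remote write into the current state of a
   };
   sync a; // barrier: a moves on to its next state
   // after the barrier the write is guaranteed to have completed
};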
== My first communication ==
Communication depends on exactly where variables are allocated, which in turn is driven by their types.
#include <io>
#include <string>
function void main() {
   var a:Int::allocated[multiple[]];
   a:=1;
   proc 1 {
      a:=99;
   };
   sync a;
   proc 0 {
      print(itostring(a)+"\n");
   };
};
If you compile and run the above code then you will see the output ''1'' - so let's have a look at exactly what is going on here. Variable ''a'' is allocated to all processes, all processes set the value to ''1'', process one then changes the value to ''99'', we do a barrier synchronisation on ''a'' and then process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value has no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment; let's do something a bit more interesting - change the ''multiple[]'' to ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated just to a specific process and another one reads/writes to it, then this involves remote access to that memory (communication). Let's experiment further: remove ''a'' from the [[Sync|sync]] statement, recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this barrier synchronises just on that variable, whereas [[Sync|sync]] by itself barrier synchronises on '''all''' variables which require it. Now comment out the [[Sync|sync]] statement entirely, recompile and run the code - see that it displays ''1'' again? This is because we can only guarantee that a value has been written into remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write is always a local operation, but if a variable is allocated just to a single process then read/write from every other process is a remote operation.
=== Further communication ===
#include <io>
#include <string>
function void main() {
   var a:Int::allocated[single[on[0]]];
   var b:Int::allocated[multiple[]];
   proc 0 {
      a:=1;
      b:=a;
   };
   sync b;
   proc 1 {
      print(itostring(b)+"\n");
   };
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) modifies ''a'' locally (as it is held there) and then assigns to ''b'' the value of ''a''. We then [[Sync|synchronise]] on variable ''b'' and process one displays its value of ''b''. Stepping back a moment, what we are basically doing here is assigning a value to a variable allocated on all processes from one allocated on a single process. The result is that process zero writes the value of variable ''a'' into ''b'' on all processes (it is a broadcast). If you remove the [[Sync|sync]] statement then you will see that instead of the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value). This is because synchronisation must occur to update this remote value on process one from process zero.
#include <io>
#include <string>
function void main() {
   var a:Int::allocated[multiple[commgroup[0,2]]];
   proc 1 {
      a:=2;
   };
   sync a;
   group 0, 2 {
      print(itostring(a)+"\n");
   };
};
The same thing happens with [[Commgroup|communication groups]] too. Compile and run the code above and you will see that process one has written the value ''2'' into the memory of variable ''a'', which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving them will result in either local or remote access, depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
   var a:Int::allocated[single[on[processOneAllocation]]];
   var b:Int::allocated[single[on[processTwoAllocation]]];
   proc processTwoAllocation {
      b:=23;
      a:=b;
   };
   //sync a;
   group processOneAllocation {
      print(itostring(a)+"\n");
   };
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero; we then perform the assignment ''a:=b'' which, because the variables are on the same process, is local and occurs immediately. Now change ''processOneAllocation'' to be ''1'', uncomment the [[Sync|sync]] statement, and recompile and run. You see the same value - but now process zero is writing the value of ''b'' into the remote memory of ''a'', and if you comment out the [[Sync|sync]] statement then a value of ''0'' is reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same then the access is local, and if not then it is remote.
== Limits of communication ==
Currently all communication is based upon assignment; to illustrate this, look at the following code:
#include <io>
#include <string>
function void main() {
   var a:Int::allocated[single[on[1]]];
   var b:Int;
   proc 0 {
      b:=a;
   };
};
If we compile this then we get the error message ''Assignment must be visible to process 1''. Because communication is assignment driven, process one (which holds ''a'') must drive this assignment and communication. To fix this you could have process one, rather than process zero, perform the assignment, which would enable this code to compile correctly. It is planned in the future to extend the compiler to support this pull (as well as push) remote memory mechanism.
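Following that fix, a version in which process one drives the assignment might look like this (a sketch of the suggested change):

#include <io>
#include <string>
function void main() {
   var a:Int::allocated[single[on[1]]];
   var b:Int;
   proc 1 {
      b:=a; // process one holds a, so it may drive the communication
   };
};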
[[Category:Tutorials|Shared Memory]]
eddb99aa7edf3789cdce6fa38edac9fc732579fc
Tutorial - Arrays
0
223
1225
2013-01-17T13:06:53Z
Polas
1
Created page with '== Introduction == An [[Array|array]] is a collection of element data in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall hav…'
wikitext
text/x-wiki
1227
1226
2013-01-17T14:00:45Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
An [[Array|array]] is a collection of elements in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall have a look at how to create, use and communicate arrays.
== Simple arrays ==
function void main() {
   var a:array[Int,10];
};
The above code declares variable ''a'' to be an [[Array|array]] of ten [[Int|Ints]], indexed 0 to 9 inclusive. In the absence of further information a set of default types is applied: [[Heap|heap]], [[Onesided|onesided]], [[Row|row]], [[Allocated|allocated]], [[Multiple|multiple]]. Arrays, when allocated to the heap, are subject to garbage collection which will remove them when no longer used.
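Written out explicitly, the declaration above is broadly equivalent to chaining the default types onto the array type; the exact chaining below is illustrative rather than the compiler's canonical form:

function void main() {
   var a:array[Int,10]::allocated[multiple[]]::heap::onesided::row;
};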
#include <io>
#include <string>
function void main() {
   var a:array[Int,10];
   var i;
   for i from 0 to 9 {
      a[i]:=i;
   };
   for i from 0 to 9 {
      print(itostring(a[i]));
   };
};
The code snippet above demonstrates writing to and reading from elements of an array; if you compile and run this code then you will see it displays the values ''0'' to ''9'' on standard output. We can access an element of an array (for reading or writing) via the ''[x]'' syntax, where ''x'' is either an [[Int]] constant or variable.
=== Arrays and functions ===
#include <io>
#include <string>
function void main() {
   var a:array[Int,10];
   fill(a);
   display(a);
};
function void fill(var a:array[Int,10]) {
   var i;
   for i from 0 to 9 {
      a[i]:=i;
   };
};
function void display(var a:array[Int]) {
   var i;
   for i from 0 to 9 {
      print(itostring(a[i]));
   };
};
This code demonstrates passing arrays into functions and there are a couple of noteworthy points to make here. First, because an [[Array|array]] is, by default, allocated to the [[Heap|heap]], as discussed in the [[Tutorial - Functions|functions tutorial]], this is pass by reference. Hence modifications made in the ''fill'' function do affect the original data allocated in the ''main'' function, which is what we want here. Secondly, see that the type we provide to the ''display'' function does not have any explicit size associated with the array? It is not always possible to know the size of an array being passed into a function, so Mesham allows the type of a function argument to be specified without a size, but with two restrictions: first, it must be a one dimensional array and secondly, no compile time bounds checking can take place.
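One way of living with the unsized-argument restriction is to pass the extent alongside the array; the extra ''lastIndex'' parameter below is an illustrative addition rather than part of the original example:

#include <io>
#include <string>
function void display(var a:array[Int], var lastIndex:Int) {
   var i;
   for i from 0 to lastIndex {
      print(itostring(a[i]));
   };
};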
=== Multi dimensional arrays ===
Arrays can have any number of dimensions, simply by adding extra bounds into the type declaration:
function void main() {
   var a:array[Int,16,8];
   a[0][1]:=23;
};
This code declares variable ''a'' to be an [[Array|array]] of two dimensions; the first of size 16 and the second of size 8. By default all allocation of arrays is [[Row|row major]], although this can be overridden. The third line illustrates writing into an element of a two dimensional array.
== Communication of arrays ==
Arrays can be communicated in their entirety, per dimension or by individual elements.
#include <io>
#include <string>
function void main() {
   var a:array[Int,16,8]::allocated[single[on[1]]];
   proc 0 {
      a[0][1]:=28;
   };
   sync a;
   proc 1 {
      print(itostring(a[0][1])+"\n");
   };
};
In this example process zero writes to the (remote) memory of process one, which holds the array; synchronisation occurs and then the value is displayed by process one on standard output.
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 7 {
a[2][i]:=i;
};
};
sync a;
proc 1 {
var i;
for i from 0 to 7 {
print(itostring(a[2][i])+"\n");
};
};
};
Compile and run this code - look at the output, it is just a list of the value ''8'', not what you expected? Well this is to be expected because at each assignment ''a[2][i]:=i;'' we are setting the remote value of ''a'' at this specific index to be the value held in ''i''. However, this does not complete until the [[Sync|synchronisation]] and at that point the value of ''i'' is ''8'' (the loop iterates up to and including 7, after which ''i'' is incremented but found to be too large and the loop ceases.) It is something to be aware of - the value of a variable being remotely written ''matters'' until after the corresponding synchronisation.
There are a number of ways in which we could change this code to make it do what we want, the easiest is to use a temporary variable allocated on the heap (and will be garbage collected after the synchronisation.) To do this, replace the ''proc 0'' block with:
proc 0 {
var i;
for i from 0 to 7 {
var m:Int::heap;
m:=i;
a[2][i]:=m;
};
};
== Row and column major ==
f8a9891b0e1377a563f1caa0fb8c9e0acb2eef70
1228
1227
2013-01-17T14:06:54Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
An [[Array|array]] is a collection of elements arranged in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall look at how to create, use and communicate arrays.
== Simple arrays ==
function void main() {
var a:array[Int,10];
};
The above code declares variable ''a'' to be an [[Array|array]] of ten [[Int|Ints]], indexed 0 to 9 inclusive. In the absence of further information a set of default types is applied: [[Heap|heap]], [[Onesided|onesided]], [[Row|row]], [[Allocated|allocated]] and [[Multiple|multiple]]. Arrays allocated to the heap are subject to garbage collection, which removes them when they are no longer used.
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
var i;
for i from 0 to 9 {
a[i]:=i;
};
for i from 0 to 9 {
print(itostring(a[i]));
};
};
The code snippet demonstrates writing to and reading from elements of an array. If you compile and run this code you will see it display the values ''0'' to ''9'' on standard output. We can access an element of an array (for reading or writing) via the ''[x]'' syntax, where ''x'' is either an [[Int]] constant or variable.
=== Arrays and functions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
fill(a);
display(a);
};
function void fill(var a:array[Int,10]) {
var i;
for i from 0 to 9 {
a[i]:=i;
};
};
function void display(var a:array[Int]) {
var i;
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code demonstrates passing arrays into functions, and there are a couple of noteworthy points here. First, because an [[Array|array]] is by default allocated to the [[Heap|heap]], as discussed in the [[Tutorial - Functions|functions tutorial]], this is pass by reference. Hence modifications made in the ''fill'' function do affect the original data allocated in the ''main'' function, which is what we want here. Secondly, note that the type we provide to the ''display'' function has no explicit size associated with the array. It is not always possible to know the size of an array being passed into a function, so Mesham allows the type of a function argument to be specified without a size, with two restrictions: first, it must be a one dimensional array and secondly, no compile time bounds checking can take place.
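The pass by reference behaviour described above can be mirrored in Python, where lists are likewise passed by reference; this is an illustrative sketch of the same ''fill''/''display'' pattern, not Mesham code:

```python
def fill(a):
    # Python lists, like Mesham heap arrays, are passed by reference,
    # so writes made here are visible to the caller.
    for i in range(len(a)):
        a[i] = i

def display(a):
    # No size appears in the signature: the callee does not need to
    # know the array's length ahead of time.
    for v in a:
        print(v)

a = [0] * 10
fill(a)
display(a)   # prints 0 through 9
```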
=== Multi dimensional arrays ===
Arrays can have any number of dimensions; extra bounds are simply added into the type declaration:
function void main() {
var a:array[Int,16,8];
a[0][1]:=23;
};
This code declares variable ''a'' to be a two dimensional [[Array|array]]; the first dimension is of size 16 and the second of size 8. By default all arrays are allocated [[Row|row major]], although this can be overridden. Line three illustrates writing into an element of a two dimensional array.
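Row major allocation means a two dimensional array is stored as one contiguous block with rows laid out end to end. A short Python sketch of the offset arithmetic (an illustration of the layout, not Mesham internals):

```python
# Row-major layout: a 16 x 8 array occupies one contiguous block,
# with element (i, j) at linear offset i * 8 + j.
ROWS, COLS = 16, 8
flat = [0] * (ROWS * COLS)

def row_major_offset(i, j, cols=COLS):
    return i * cols + j

# Equivalent of the Mesham assignment a[0][1] := 23
flat[row_major_offset(0, 1)] = 23
assert flat[1] == 23   # (0, 1) is the second element in memory
```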
== Communication of arrays ==
Arrays can be communicated entirely, per dimension or by individual elements.
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
a[0][1]:=28;
};
sync a;
proc 1 {
print(itostring(a[0][1])+"\n");
};
};
In this example process 0 writes to the (remote) memory of process 1, which contains the array; synchronisation occurs and then the value is displayed by process 1 on standard output.
=== Communicating multiple dimensions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 7 {
a[2][i]:=i;
};
};
sync a;
proc 1 {
var i;
for i from 0 to 7 {
print(itostring(a[2][i])+"\n");
};
};
};
Compile and run this code and look at the output: it is just a list of the value ''8''. Not what you expected? This is because at each assignment ''a[2][i]:=i;'' we are setting the remote value of ''a'' at that specific index to the value held in ''i''. However, the write does not complete until the [[Sync|synchronisation]], and at that point the value of ''i'' is ''8'' (the loop iterates up to and including 7, after which ''i'' is incremented, found to be too large, and the loop ceases). It is something to be aware of: the value of a variable being remotely written ''matters'' until after the corresponding synchronisation.
There are a number of ways we could change this code to make it do what we want; the easiest is to use a temporary variable allocated on the heap (which will be garbage collected after the synchronisation). To do this, replace the ''proc 0'' block with:
proc 0 {
var i;
for i from 0 to 7 {
var m:Int::heap;
m:=i;
a[2][i]:=m;
};
};
This is an example of writing into the remote memory of a process and modifying multiple indexes of an array (in any dimension).
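The pitfall and its fix can be modelled in Python by recording writes and applying them at a simulated sync point. The deferred-write mechanics here are an illustrative assumption for teaching purposes, not Mesham's actual implementation:

```python
# Toy model: each a[2][i] := i records a write whose *source* is only
# read when sync runs (an assumption used to illustrate the pitfall).
pending = []
env = {}
a = [[0] * 8 for _ in range(16)]

env['i'] = 0
for _ in range(8):
    idx = env['i']
    # The destination index is fixed, but the source value is deferred.
    pending.append((idx, lambda: env['i']))
    env['i'] += 1

# "sync": all writes complete now, when i has already reached 8.
for idx, source in pending:
    a[2][idx] = source()
print(a[2])   # every element is 8

# The fix mirrors the Mesham temporary m: snapshot the value at write time.
pending = []
env['i'] = 0
for _ in range(8):
    idx = env['i']
    m = env['i']                      # like var m:Int::heap; m := i
    pending.append((idx, lambda m=m: m))
    env['i'] += 1
for idx, source in pending:
    a[2][idx] = source()
print(a[2])   # now 0 .. 7 as intended
```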
=== Communicating entire arrays ===
#include <io>
#include <string>
function void main() {
var a:array[Int,20]::allocated[single[on[1]]];
var b:array[Int,20]::allocated[single[on[2]]];
proc 1 {
var i;
for i from 0 to 19 {
a[i]:=1;
};
};
b:=a;
sync;
proc 2 {
var i;
for i from 0 to 19 {
print(itostring(b[i])+"\n");
};
};
};
This code example demonstrates populating an array held on one process, assigning it in its entirety to an array on another process (line 13), synchronising, and then the other process reading out all elements of the target array which has just been remotely written to.
== Row and column major ==
By default arrays are allocated row major, using the [[Row|row]] type. This can be overridden to column major via the [[Col|col]] type.
function void main() {
var a:array[Int,16,8]::allocated[col::multiple];
};
This will allocate array ''a'' to be a 16 by 8 [[Int]] array on all processes, using column major memory allocation.
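The difference between the two layouts comes down to which index varies fastest in memory; a Python sketch of the two offset formulas for a 16 by 8 array (illustrative, not Mesham internals):

```python
# Row-major vs column-major offsets for a 16 x 8 array: the same
# logical element (i, j) lands at different linear positions.
ROWS, COLS = 16, 8

def row_major(i, j):
    return i * COLS + j   # rows are contiguous in memory

def col_major(i, j):
    return j * ROWS + i   # columns are contiguous in memory

print(row_major(2, 3))   # 19
print(col_major(2, 3))   # 50
```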
For something more interesting let's have a look at the following code:
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8];
var i;
var j;
for i from 0 to 15 {
for j from 0 to 7 {
a[i][j]:=(i*10) + j;
};
};
print(itostring(a::col[][14][7]));
};
By default variable ''a'' is allocated [[Row|row major]] and we fill the array in this fashion. However, in the [[Print|print]] statement we access the indexes of this array in a [[Col|column major]] fashion. Try changing [[Col|col]] to [[Row|row]], or removing it altogether, to see the difference in value. Behind the scenes the types perform the appropriate memory lookup based upon their meaning and the indexes provided. Mixing memory layouts in this manner can be very useful for array transposition amongst other things. ''Exercise:'' Experiment with the [[Col|col]] and [[Row|row]] types and also see what effect placing them in the type chain of ''a'', as in the previous example, has.
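To reason about the example, note that reading with [[Col|col]] amounts to reinterpreting the same linear block of memory using column major offsets (an assumption consistent with the description above); a Python sketch:

```python
ROWS, COLS = 16, 8

# Fill row-major memory exactly as the Mesham loop does: a[i][j] := i*10 + j
flat = [i * 10 + j for i in range(ROWS) for j in range(COLS)]

# Reading index (14, 7) with a column-major interpretation of the same
# block: the column-major offset of (i, j) in a 16 x 8 array is i + j*16.
offset = 14 + 7 * ROWS    # = 126
print(flat[offset])       # 156, i.e. the row-major element (15, 6)
```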
[[Category: Tutorials|Arrays]]
Tutorial - Parallel Types
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model, it can carry a performance penalty. In this tutorial we shall look at overriding the default communication, via types, to a more message passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]], between processes ''1'' and ''2''. At line 8, process 1 writes the value ''23'' into this channel and at line 11, process 2 reads that value out of the channel. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example.)
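The unidirectional send/receive pattern of a channel can be sketched with a Python queue; this is only an analogy for the behaviour, not how Mesham implements channels:

```python
from queue import Queue

# A toy unidirectional channel: one side only puts, the other only gets.
channel = Queue()

# proc 1: write the value 23 into the channel
channel.put(23)

# proc 2: read the value out of the channel
b = channel.get()
print(b)   # 23
```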
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int::pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=1;
var slave:=2;
if (i%2!=0) {
master:=2;
slave:=1;
};
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point to point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that instead only process 1 may send and only process 2 may receive.
== Extra parallel control ==
By default the channel type is a blocking call; there are a number of fine grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that it will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we are declaring ''a'' to be a normal [[Int]] variable, then on line 6 we are coercing the [[Broadcast|broadcast]] type with the existing type chain of ''a'' just for that assignment and telling the type that process ''2'' is the root process. The root process is the one that drives the broadcast itself, i.e. here process 2 is sending the value ''23'' to all other processes. Then on line 7 we are just using ''a'' as a normal program variable to display its value. This use of types is actually quite a powerful one; we can append extra types for a specific expression and then after that expression has completed the behaviour is back to what it was before.
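The semantics of the broadcast can be modelled in a few lines of Python; this is a toy model in which each "process" holds its own copy of the variable:

```python
# Toy broadcast: the root process's value is copied to every process.
def broadcast(values, root):
    # values[i] is process i's copy before the call; afterwards every
    # copy equals the root's value.
    return [values[root]] * len(values)

# Process 2 (the root) holds 23; all others start with 0.
print(broadcast([0, 0, 23, 0], 2))   # [23, 23, 23, 23]
```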
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, by applying some operation, [[Reduce|reduce]] them to a single resulting value.
#include <io>
#include <string>
function void main() {
   var p;
   par p from 0 to 19 {
      var a:Int;
      a::reduce[0,"sum"]:=p;
      if (p==0) print(itostring(a)+"\n");
   };
};
This code combines the value of ''p'' from every process onto process 0 and sums them all up. Multiple operations are supported; they are listed in the [[Reduce|reduce type documentation]].
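As a quick sanity check on the expected output (plain Python arithmetic, not Mesham): with the 20 processes in the example each contributing its identifier ''p'', the "sum" reduction delivers 0 + 1 + ... + 19 to process 0.

```python
# Each of the 20 processes in the example contributes its identifier p
values = list(range(20))
# A "sum" reduction combines every contribution into one result on the root
result = sum(values)
print(result)  # 190
```

So the program above should print 190 on process 0.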
[[Category:Tutorials|Parallel Types]]
c2183984f5f4085b6aa9b4961f10e527dc70083b
Mandelbrot
0
135
741
740
2013-01-18T18:01:10Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:mandle.gif|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which is quite simple, does not really matter for our purposes. The important points are firstly that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and secondly that it produces an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
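For reference, the escape-time iteration that ''iteratePixel'' below implements can be sketched in Python; the pixel-to-plane mapping, the escape threshold of 100 and the default sizes are taken directly from the source code:

```python
def iterate_pixel(hy, hx, hxres=512, hyres=512, magnify=1, itermax=1000):
    # Map the pixel (hx, hy) to a point c = cx + cy*i in the complex plane
    cx = (((hx / hxres) - 0.5) / magnify) * 3 - 0.7
    cy = (((hy / hyres) - 0.5) / magnify) * 3
    x = y = 0.0  # start the iteration z = z*z + c from z = 0
    for iteration in range(1, itermax + 1):
        x, y = (x * x - y * y) + cx, (2 * x * y) + cy
        if (x * x) + (y * y) > 100:
            return iteration  # escaped: the point lies outside the set
    return -1  # never escaped: the point is taken to be inside the set
```

''determinePixelColour'' then maps the returned iteration count (or -1) to an RGB value.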
<br style="clear: both" />
== Performance ==
[[Image:mandlezoom.jpg|400px|left|Mandelbrot Performance Evaluation, Mesham against C-MPI]]
The Mandelbrot example was evaluated against one written in C-MPI on a supercomputing cluster. The graph details the performance of the two codes; on small numbers of processors their performance was so close that it is not shown. Due to the embarrassingly parallel nature of this problem, the performance advantages of using Mesham do not start to stand out until a large number of processors is reached.
<br style="clear: both" />
== Source Code ==
#include <io>
#include <string>
typevar pixel::=record["r",Int,"g",Int,"b",Int];
var pnum:=16; // number of processes to run this on
var hxres:=512;
var hyres:=512;
var magnify:=1;
var itermax:=1000;
function Int iteratePixel(var hy:Float, var hx:Float) {
var cx:Double;
cx:=((((hx / hxres) - 0.5) / magnify) * 3) - 0.7;
var cy:Double;
cy:=(((hy / hyres) - 0.5) / magnify) * 3;
var x:Double;
x:=0.0;
var y:Double;
y:=0.0; // start the escape-time iteration from z = 0
var iteration;
for iteration from 1 to itermax {
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100) {
return iteration;
};
};
return -1;
};
function void main() {
var mydata:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1 {
var tempd:array[record["r",Int,"g",Int,"b",Int], hyres];
var myStart:=p * (hyres / pnum);
var hy:Float;
for hy from myStart to (myStart + (hyres / pnum)) - 1 {
var hx;
for hx from 0 to hxres - 1 {
var iteration := iteratePixel(hy, hx);
tempd[hx]:=determinePixelColour(iteration);
};
mydata[hy]:=tempd;
sync mydata;
};
};
proc 0 {
createImageFile("picture.ppm", mydata);
};
};
function pixel determinePixelColour(var iteration:Int) {
var singlePixel:pixel;
if (iteration > -1) {
singlePixel.b:=(iteration * 10) + 100;
singlePixel.r:=(iteration * 3) + 50;
singlePixel.g:=(iteration * 3)+ 50;
if (iteration > 25) {
singlePixel.b:=0;
singlePixel.r:=(iteration * 10);
singlePixel.g:=(iteration * 5);
};
if (singlePixel.b > 255) singlePixel.b:=255;
if (singlePixel.r > 255) singlePixel.r:=255;
if (singlePixel.g > 255) singlePixel.g:=255;
} else {
singlePixel.r:=0;
singlePixel.g:=0;
singlePixel.b:=0;
};
return singlePixel;
};
function void createImageFile(var name:String, var mydata:array[pixel,hxres,hyres]) {
var file:=open(name,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(hyres));
writestring(file," ");
writestring(file,itostring(hxres));
writestring(file,"\n255\n");
// now write data into the file
var j;
for j from 0 to hyres - 1 {
var i;
for i from 0 to hxres - 1 {
writebinary(file,mydata[j][i].r);
writebinary(file,mydata[j][i].g);
writebinary(file,mydata[j][i].b);
};
};
close(file);
};
''This code is compatible with Mesham version 1.0 and later''
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The Mandelbrot set will be calculated for up to ''itermax'' iterations at each point; by increasing this value you will get a crisper image (but it will take much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image: a value of 1 will generate the whole image, and increasing it directs the computation into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for these are easy to come by on Unix-based systems (e.g. Eye of GNOME) but are slightly harder to find on Windows. Windows users might want to rewrite some of the last part on process 0 so that a bitmap (BMP) file is created instead.
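The binary P6 variant written by ''createImageFile'' is just a short ASCII header followed by raw RGB bytes. A minimal Python sketch of the same layout (the creator comment line the Mesham code writes is optional in the format):

```python
def ppm_p6_bytes(pixels, width, height):
    # pixels: rows of (r, g, b) tuples, each component in 0..255
    header = "P6\n{} {}\n255\n".format(width, height).encode("ascii")
    body = bytes(c for row in pixels for rgb in row for c in rgb)
    return header + body
```

Writing the returned bytes to a file with a .ppm extension produces a viewable image.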
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here] or a legacy Mesham 0.5 version [http://www.mesham.com/downloads/mandle-0.5.mesh here]
[[Category:Example Codes]]
abe2c6d96ff86185f9c411c20657d5a501e17222
Image processing
0
142
787
786
2013-01-18T18:15:15Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example is one of the more complex ones we have written in the language. It allows the user to perform parallel image processing on a black and white image, applying a low or high pass filter. To do this the image must first be transformed into the frequency domain, and afterwards transformed back into the time domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient ones exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters, for instance invoking the high pass filter rather than the low pass which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
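The low and high pass filters used later in the code are simple ideal (brick-wall) masks on the frequency-domain coordinates: a frequency is kept only if its distance from the origin is inside (or, for high pass, outside) a cutoff radius. The same tests in Python, with the cutoff radii 225 and 190 taken from the source code:

```python
import math

def lowpass(i, j, cutoff=225.0):
    # Ideal low-pass: keep frequency (i, j) only if it lies within the cutoff radius
    return 1 if math.hypot(i, j) < cutoff else 0

def highpass(i, j, cutoff=190.0):
    # Ideal high-pass: keep only frequencies beyond the cutoff radius
    return 1 if math.hypot(i, j) > cutoff else 0
```

Multiplying each frequency-domain sample by the mask value, as ''filter'' does, zeroes out the rejected frequencies.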
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputing cluster. Two different experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. Evaluations were performed against the Fastest Fourier Transform in the West (FFTW) and, for 128MB, a book example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler will optimise the code in this case to avoid any slowdown).
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
#include <maths>
#include <io>
#include <string>
var n:=256; // image size
var m:=4; // number of processors
function array[complex] computesin() {
var elements:= n/2;
var sinusoid:array[complex, elements];
var j;
for j from 0 to (n / 2) - 1 {
var topass:Float;
topass:=((2 * pi() * j) / n);
sinusoid[j].i:=sin(topass);
sinusoid[j].i:=-sinusoid[j].i;
sinusoid[j].r:=cos(topass);
};
return sinusoid;
};
function Int getLogn() {
var logn:=0;
var nx:=n;
nx := nx >> 1;
while (nx >0) {
logn++;
nx := nx >> 1;
};
return logn;
};
function void main() {
var a:array[complex,n,n] :: allocated[single[on[0]]];
var s:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist]];
var s2:array[complex,n,n] :: allocated[horizontal[m] :: col[] :: single[evendist]];
var s3:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist] :: share[s2]];
proc 0 {
loadfile("data/clown.ppm",a);
moveorigin(a);
};
s:=a;
var sinusiods:=computesin();
var p;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
filter(a);
invert(a);
};
s:=a;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
moveorigin(a);
descale(a);
writefile("newclown.ppm", a);
};
};
function void moveorigin(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * pow(-1,(i + j));
data[i][j].i:=data[i][j].i * pow(-1,(i + j));
};
};
};
function void descale(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r / (n * n) ;
var xnumy:Double;
xnumy:=data[i][j].i;
xnumy:=xnumy / (n * n);
data[i][j].i:=-xnumy;
};
};
};
function void invert(var data : array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].i:=-data[i][j].i;
};
};
};
function void FFT(var data : array[complex,n], var sinusoid:array[complex]) {
var i2:=getLogn();
bitreverse(data); // data decomposition
var f0:Double;
var f1:Double;
var f2:Double;
var f3:Double;
var increvec;
for increvec from 2 to n {
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec / 2) - 1) {
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 {
// do butterfly for each point in the spectra
f0:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].r)- (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].i);
f1:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].i)+ (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].r);
f2:=data[i0 + i1].r;
f3:=data[i0 + i1].i;
data[i0 + i1 + (increvec / 2)].r:= f2- f0;
data[i0 + i1 + (increvec / 2)].i:=f3 - f1;
data[i0 + i1].r := f2 + f0;
data[i0 + i1].i := f3 + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void loadfile(var name:String,var data:array[complex,n,n]) {
var file:=open(name,"r");
readline(file);
readline(file);
readline(file);
readline(file);
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
var red:=readchar(file);
readchar(file);readchar(file);
data[i][j].r:=red;
data[i][j].i:=red;
};
};
close(file);
};
function void writefile(var thename:String, var data:array[complex,n,n]) {
var file:=open(thename,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(n));
writestring(file," ");
writestring(file,itostring(n));
writestring(file,"\n255\n");
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
};
};
close(file);
};
function Int lowpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) < 225) return 1;
return 0;
};
function Int highpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) > 190) return 1;
return 0;
};
function void filter(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * lowpass(i,j);
data[i][j].i:=data[i][j].i * lowpass(i,j);
};
};
};
function void bitreverse(var a:array[complex,n]) {
var j:=0;
var k:Int;
var i;
for i from 0 to n-2 {
if (i < j) {
var swap_temp:Double;
swap_temp:=a[j].r;
a[j].r:=a[i].r;
a[i].r:=swap_temp;
swap_temp:=a[j].i;
a[j].i:=a[i].i;
a[i].i:=swap_temp;
};
k := n >> 1;
while (k <= j) {
j := j - k;
k := k >>1;
};
j := j + k;
};
};
''This version requires at least Mesham version 1.0''
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then redistributed. It would improve the runtime if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for these are easy to come by on Unix based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users might want to rewrite the final part of the code on process 0 so that a BMP bitmap is created instead.
== Download ==
You can download the entire image processing package [http://www.mesham.com/downloads/fftimage.zip here].
[[Category:Example Codes]]
86ec56e0b92c86cdb412f035d6f5c4acaca6f7bb
788
787
2013-01-18T18:21:58Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example is one of the more complex ones we have written in the language. It allows the user to perform parallel image processing on a black and white image, applying either a low pass or high pass filter. To do this the image must first be transformed into the frequency domain, and afterwards transformed back into the time domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient variants exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters, for instance invoking the high pass filter rather than the low pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
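For readers unfamiliar with the kernel, the Cooley-Tukey idea can be sketched in plain Python. This is an illustrative recursive radix-2 version checked against a naive DFT, not the iterative in-place kernel used in the Mesham source below:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # FFT of the even-indexed samples
    odd = fft(x[1::2])    # FFT of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle            # butterfly: top half
        out[k + n // 2] = even[k] - twiddle   # butterfly: bottom half
    return out

def dft(x):
    """Naive O(n^2) DFT, used only to verify the FFT above."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

signal = [1, 2, 3, 4, 0, 0, 0, 0]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(signal), dft(signal)))
```

The 2D transform used on the image is simply this 1D transform applied to every row, followed by a transpose and another pass over every row.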
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. Evaluations were made against the Fastest Fourier Transform in the West (FFTW) and, for the 128MB case, a textbook example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler optimises the code in this case to avoid any slowdown).
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
#include <maths>
#include <io>
#include <string>
var n:=256; // image size
var m:=4; // number of processors
function array[complex] computesin() {
var elements:= n/2;
var sinusoid:array[complex, elements];
var j;
for j from 0 to (n / 2) - 1 {
var topass:Float;
topass:=((2 * pi() * j) / n);
sinusoid[j].i:=sin(topass);
sinusoid[j].i:=-sinusoid[j].i;
sinusoid[j].r:=cos(topass);
};
return sinusoid;
};
function Int getLogn() {
var logn:=0;
var nx:=n;
nx := nx >> 1;
while (nx >0) {
logn++;
nx := nx >> 1;
};
return logn;
};
function void main() {
var a:array[complex,n,n] :: allocated[single[on[0]]];
var s:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist]];
var s2:array[complex,n,n] :: allocated[horizontal[m] :: col[] :: single[evendist]];
var s3:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist] :: share[s2]];
proc 0 {
loadfile("data/clown.ppm",a);
moveorigin(a);
};
s:=a;
var sinusoids:=computesin();
var p;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusoids);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusoids);
};
};
a:=s3;
proc 0 {
filter(a);
invert(a);
};
s:=a;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusoids);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusoids);
};
};
a:=s3;
proc 0 {
moveorigin(a);
descale(a);
writefile("newclown.ppm", a);
};
};
function void moveorigin(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * pow(-1,(i + j));
data[i][j].i:=data[i][j].i * pow(-1,(i + j));
};
};
};
function void descale(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r / (n * n) ;
var xnumy:Double;
xnumy:=data[i][j].i;
xnumy:=xnumy / (n * n);
data[i][j].i:=-xnumy;
};
};
};
function void invert(var data : array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].i:=-data[i][j].i;
};
};
};
function void FFT(var data : array[complex,n], var sinusoid:array[complex]) {
var i2:=getLogn();
bitreverse(data); // data decomposition
var f0:Double;
var f1:Double;
var f2:Double;
var f3:Double;
var increvec;
for increvec from 2 to n {
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec / 2) - 1) {
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 {
// do butterfly for each point in the spectra
f0:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].r)- (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].i);
f1:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].i)+ (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].r);
f2:=data[i0 + i1].r;
f3:=data[i0 + i1].i;
data[i0 + i1 + (increvec / 2)].r:= f2- f0;
data[i0 + i1 + (increvec / 2)].i:=f3 - f1;
data[i0 + i1].r := f2 + f0;
data[i0 + i1].i := f3 + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void loadfile(var name:String,var data:array[complex,n,n]) {
var file:=open(name,"r");
readline(file);
readline(file);
readline(file);
readline(file);
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
var red:=readchar(file);
readchar(file);readchar(file);
data[i][j].r:=red;
data[i][j].i:=red;
};
};
close(file);
};
function void writefile(var thename:String, var data:array[complex,n,n]) {
var file:=open(thename,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(n));
writestring(file," ");
writestring(file,itostring(n));
writestring(file,"\n255\n");
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
};
};
close(file);
};
function Int lowpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) < 225) return 1;
return 0;
};
function Int highpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) > 190) return 1;
return 0;
};
function void filter(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * lowpass(i,j);
data[i][j].i:=data[i][j].i * lowpass(i,j);
};
};
};
function void bitreverse(var a:array[complex,n]) {
var j:=0;
var k:Int;
var i;
for i from 0 to n-2 {
if (i < j) {
var swap_temp:Double;
swap_temp:=a[j].r;
a[j].r:=a[i].r;
a[i].r:=swap_temp;
swap_temp:=a[j].i;
a[j].i:=a[i].i;
a[i].i:=swap_temp;
};
k := n >> 1;
while (k <= j) {
j := j - k;
k := k >>1;
};
j := j + k;
};
};
''This version requires at least Mesham version 1.0''
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then redistributed. It would improve the runtime if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
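For reference, the reordering performed by the bitreverse function at the start of each 1D FFT can be sketched in Python. This version is illustrative only; it computes each target index directly from the bits of the source index, whereas the Mesham version above maintains the permutation incrementally:

```python
def bit_reverse_permute(a):
    """Reorder a list (power-of-two length) into bit-reversed index order."""
    n = len(a)
    bits = n.bit_length() - 1  # log2(n)
    out = [None] * n
    for i in range(n):
        # reverse the low `bits` bits of the index i
        rev = int(format(i, f'0{bits}b')[::-1], 2)
        out[rev] = a[i]
    return out

print(bit_reverse_permute([0, 1, 2, 3, 4, 5, 6, 7]))
# indices 000..111 reversed give [0, 4, 2, 6, 1, 5, 3, 7]
```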
'''Note:''' This example produces an image in the Portable PixMap (PPM) format. Viewers for these are easy to come by on Unix based systems (e.g. Eye of GNOME) but slightly harder to find on Windows. Windows users might want to rewrite the final part of the code on process 0 so that a BMP bitmap is created instead.
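The P6 layout written by writefile - an ASCII header followed by raw binary RGB triples - is easy to replicate. A minimal Python sketch (the file name and comment string here are just examples):

```python
def write_ppm(path, pixels, width, height):
    """Write greyscale values (0-255) as a binary P6 PPM file,
    duplicating each value across R, G and B as writefile does."""
    with open(path, 'wb') as f:
        f.write(b"P6\n# CREATOR: example\n")          # magic number and comment
        f.write(f"{width} {height}\n255\n".encode('ascii'))
        for v in pixels:
            f.write(bytes([v, v, v]))                  # same value for each channel

# a 16x16 greyscale gradient
write_ppm("gradient.ppm", list(range(256)), 16, 16)
```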
== Download ==
You can download the entire image processing package [http://www.mesham.com/downloads/fftimage.zip here]. There is also a legacy version for Mesham 0.5 [http://www.mesham.com/downloads/fftimage-0.5.zip here]
There is also a simplified FFT code available [http://www.mesham.com/downloads/fft.mesh here], which the image processing was based upon.
[[Category:Example Codes]]
e57b472e20ebfa7574ed19961eaf28bb7bde50cc
Dartboard PI
0
139
761
760
2013-01-18T18:25:30Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm for finding the value of PI. It must be noted that there are much better methods out there for finding PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard mounted on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square equals the ratio between the two areas, which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example each process performs this simulated throwing a number of times, and then the processes' approximations of PI are combined and averaged by one of the processes to obtain the result. Very roughly, this means that with d darts thrown over r rounds on n processes, the time taken in parallel is the time it takes to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded communication costs from the parallel case to simplify the concept.) In the example, changing the number of processes, the number of rounds and the number of darts thrown in each round will directly change the accuracy of the result.
== Source Code ==
#include <maths>
#include <io>
#include <string>
var m:=64; // number of processes
function void main() {
var calculatedPi:array[Double,m]:: allocated[single[on[0]]];
var mypi:Double;
var p;
par p from 0 to m - 1 {
var darts:=10000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i;
for i from 0 to rounds - 1 {
mypi:=mypi + (4.0 * (throwdarts(darts) / darts));
};
mypi:=mypi / rounds;
calculatedPi[p]:=mypi;
};
sync;
proc 0 {
var avepi:Double;
var i;
for i from 0 to m - 1 {
avepi:=avepi + calculatedPi[i];
};
avepi:=avepi / m;
print(dtostring(avepi, "%.2f")+"\n");
};
};
function Int throwdarts(var darts:Int)
{
var score:=0;
var n:=0;
for n from 0 to darts - 1 {
var xcoord:=randomnumber(0,1);
var ycoord:=randomnumber(0,1);
if ((pow(xcoord,2) + pow(ycoord,2)) < 1.0) {
score++; // hit the dartboard!
};
};
return score;
};
''This code requires at least Mesham version 1.0''
== Notes ==
An interesting aside is the throwdarts function, which simulates throwing the darts for each round. As noted in the language documentation, the main function is optional; without it the compiler will set the program entry point to be the start of the source code.
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]
[[Category:Example Codes]]
89076309d3e04479ab0712dbeec15465ad03f575
762
761
2013-01-18T18:26:24Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm for finding the value of PI. It must be noted that there are much better methods out there for finding PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard mounted on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square equals the ratio between the two areas, which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example each process performs this simulated throwing a number of times, and then the processes' approximations of PI are combined and averaged by one of the processes to obtain the result. Very roughly, this means that with d darts thrown over r rounds on n processes, the time taken in parallel is the time it takes to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded communication costs from the parallel case to simplify the concept.) In the example, changing the number of processes, the number of rounds and the number of darts thrown in each round will directly change the accuracy of the result.
== Source Code ==
#include <maths>
#include <io>
#include <string>
var m:=64; // number of processes
function void main() {
var calculatedPi:array[Double,m]:: allocated[single[on[0]]];
var mypi:Double;
var p;
par p from 0 to m - 1 {
var darts:=10000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i;
for i from 0 to rounds - 1 {
mypi:=mypi + (4.0 * (throwdarts(darts) / darts));
};
mypi:=mypi / rounds;
calculatedPi[p]:=mypi;
};
sync;
proc 0 {
var avepi:Double;
var i;
for i from 0 to m - 1 {
avepi:=avepi + calculatedPi[i];
};
avepi:=avepi / m;
print(dtostring(avepi, "%.2f")+"\n");
};
};
function Int throwdarts(var darts:Int)
{
var score:=0;
var n:=0;
for n from 0 to darts - 1 {
var xcoord:=randomnumber(0,1);
var ycoord:=randomnumber(0,1);
if ((pow(xcoord,2) + pow(ycoord,2)) < 1.0) {
score++; // hit the dartboard!
};
};
return score;
};
''This code requires at least Mesham version 1.0''
== Notes ==
An interesting aside is the throwdarts function, which simulates throwing the darts for each round. As noted in the language documentation, the main function is optional; without it the compiler will set the program entry point to be the start of the source code.
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]. A legacy version for Mesham 0.5 can be downloaded [http://www.mesham.com/downloads/pi-0.5.mesh here]
[[Category:Example Codes]]
7a7859c9d028c4e9f0be13d854a4376f0f52aaf2
763
762
2013-01-18T18:31:57Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm for finding the value of PI. It must be noted that there are much better methods out there for finding PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard mounted on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square equals the ratio between the two areas, which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example each process performs this simulated throwing a number of times, and then the processes' approximations of PI are combined and averaged by one of the processes to obtain the result. Very roughly, this means that with d darts thrown over r rounds on n processes, the time taken in parallel is the time it takes to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded communication costs from the parallel case to simplify the concept.) In the example, changing the number of processes, the number of rounds and the number of darts thrown in each round will directly change the accuracy of the result.
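The scheme is easy to prototype sequentially; a quick Python sketch of what a single process computes (the function names here are illustrative, not the Mesham API):

```python
import random

def throw_darts(darts, rng):
    """Count darts landing inside the unit quarter-circle."""
    score = 0
    for _ in range(darts):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            score += 1   # hit the dartboard
    return score

def estimate_pi(darts=10000, rounds=100, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        total += 4.0 * throw_darts(darts, rng) / darts
    return total / rounds  # average the per-round estimates

print(estimate_pi())  # close to 3.14159...
```

The parallel version simply runs this on every process with independent random numbers and averages the per-process estimates on process 0.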
== Source Code ==
#include <maths>
#include <io>
#include <string>
var m:=64; // number of processes
function void main() {
var calculatedPi:array[Double,m]:: allocated[single[on[0]]];
var mypi:Double;
var p;
par p from 0 to m - 1 {
var darts:=10000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i;
for i from 0 to rounds - 1 {
mypi:=mypi + (4.0 * (throwdarts(darts) / darts));
};
mypi:=mypi / rounds;
calculatedPi[p]:=mypi;
};
sync;
proc 0 {
var avepi:Double;
var i;
for i from 0 to m - 1 {
avepi:=avepi + calculatedPi[i];
};
avepi:=avepi / m;
print(dtostring(avepi, "%.2f")+"\n");
};
};
function Int throwdarts(var darts:Int)
{
var score:=0;
var n:=0;
for n from 0 to darts - 1 {
var xcoord:=randomnumber(0,1);
var ycoord:=randomnumber(0,1);
if ((pow(xcoord,2) + pow(ycoord,2)) < 1.0) {
score++; // hit the dartboard!
};
};
return score;
};
''This code requires at least Mesham version 1.0''
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]. A legacy version for Mesham 0.5 can be downloaded [http://www.mesham.com/downloads/pi-0.5.mesh here]
[[Category:Example Codes]]
cb7b213b0f9d1db167403232f476c43fbb68c409
Prime factorization
0
140
770
769
2013-01-18T18:29:24Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example performs prime factorization of a number in parallel, returning the prime factors which make up that number. The example uses the all-reduce communication primitive. There are a number of ways such a result can be obtained; this example uses a simple parallel algorithm for the job.
== Source Code ==
#include <io>
#include <string>
#include <maths>
var n:=976; // this is the number to factorize
var m:=10; // number of processes to use
var s:Int :: allocated[multiple[]];
function void main() {
var p;
par p from 0 to m - 1 {
var k:=p;
var divisor;
var quotient;
while (n > 1) {
divisor:= getprime(k);
quotient:= n / divisor;
var remainder:= n % divisor;
if (remainder == 0) {
n:=quotient;
} else {
k:=k + m;
};
s :: allreduce["min"]:=n;
if ((s==n) && (quotient==n)) {
print(itostring(divisor)+"\n");
};
n:=s;
};
};
};
''This code requires at least Mesham version 1.0''
== Notes ==
Note how the quotient has been typed to be an integer - this means that the division n / divisor will throw away the remainder. Also, for the assignment s:=n, we have typed s with the allreduce communication primitive (resulting in an MPI all-reduce operation). However, later on we use s as a normal variable in the assignment n:=s, because the type applied in the previous assignment is only temporary.
As an exercise, the example could be extended so that the user provides the number either by command line arguments or via program input.
== Download ==
You can download the prime factorization source code [http://www.mesham.com/downloads/fact.mesh here]
[[Category:Example Codes]]
ab918a973252a3b6dca6d5c1a0361d3a53ab779f
771
770
2013-01-18T18:30:11Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example performs prime factorization of a number in parallel, returning the prime factors which make up that number. The example uses the all-reduce communication primitive. There are a number of ways such a result can be obtained; this example uses a simple parallel algorithm for the job.
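The underlying sequential idea - repeatedly dividing out the smallest divisor - can be sketched in a few lines of Python. (The parallel Mesham version below instead has each process test a different stride of candidate primes and agree on the result with an all-reduce.)

```python
def prime_factors(n):
    """Return the prime factors of n by trial division, smallest first."""
    factors = []
    divisor = 2
    while n > 1:
        if n % divisor == 0:
            factors.append(divisor)
            n //= divisor        # same role as quotient := n / divisor
        else:
            divisor += 1         # try the next candidate
    return factors

print(prime_factors(976))  # 976 = 2 * 2 * 2 * 2 * 61
```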
== Source Code ==
#include <io>
#include <string>
#include <maths>
var n:=976; // this is the number to factorize
var m:=10; // number of processes to use
var s:Int :: allocated[multiple[]];
function void main() {
var p;
par p from 0 to m - 1 {
var k:=p;
var divisor;
var quotient;
while (n > 1) {
divisor:= getprime(k);
quotient:= n / divisor;
var remainder:= n % divisor;
if (remainder == 0) {
n:=quotient;
} else {
k:=k + m;
};
s :: allreduce["min"]:=n;
if ((s==n) && (quotient==n)) {
print(itostring(divisor)+"\n");
};
n:=s;
};
};
};
''This code requires at least Mesham version 1.0''
== Notes ==
Note how the quotient has been typed to be an integer - this means that the division n / divisor will throw away the remainder. Also, for the assignment s:=n, we have typed s with the allreduce communication primitive (resulting in an MPI all-reduce operation). However, later on we use s as a normal variable in the assignment n:=s, because the type applied in the previous assignment is only temporary.
As an exercise, the example could be extended so that the user provides the number either by command line arguments or via program input.
== Download ==
You can download the prime factorization source code [http://www.mesham.com/downloads/fact.mesh here] and a legacy version for Mesham 0.5 is also available [http://www.mesham.com/downloads/fact-0.5.mesh here]
[[Category:Example Codes]]
b73939d85a0373ef688ae63897e7a1035613cd1d
Prefix sums
0
137
750
749
2013-01-19T13:28:28Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Prefix sums is a very simple parallel algorithm commonly used as a building block of many applications. Also known as a scan, each process sums its own value with the values of all preceding processes. For instance, p=0 returns its own value, p=1 returns the sum of the p=1 and p=0 values, and p=2 returns the sum of the p=2, p=1 and p=0 values. The MPI reduce command often implements this communication using a logarithmic tree structure.
== Source Code ==
#include <maths>
#include <io>
#include <string>
var processes:=10;
function void main(var argc:Int,var argv:array[String]) {
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to processes - 1 {
var mine:Int; // Force to be an integer as randomnumber function defaults to double
mine:= randomnumber(0,toint(argv[1]));
var i;
for i from 0 to processes - 1 {
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print(itostring(p)+" "+itostring(mine)+" = "+itostring(a)+"\n");
};
};
== Notes ==
The function main has been included here so that the user can provide, via command line arguments, the range of the random number to generate. The complexity of the prefix sums is hidden by using the reduce primitive communication type.
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]
[[Category:Example Codes]]
7317caa0fb7b760f0323fa2f2d29b5dbf91c492a
751
750
2013-01-19T13:28:46Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Prefix sums is a very simple parallel algorithm commonly used as a building block of many applications. Also known as a scan, each process sums its own value with the values of all preceding processes. For instance, p=0 returns its own value, p=1 returns the sum of the p=1 and p=0 values, and p=2 returns the sum of the p=2, p=1 and p=0 values. The MPI reduce command often implements this communication using a logarithmic tree structure.
== Source Code ==
#include <maths>
#include <io>
#include <string>
var processes:=10;
function void main(var argc:Int,var argv:array[String]) {
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to processes - 1 {
var mine:Int; // Force to be an integer as randomnumber function defaults to double
mine:= randomnumber(0,toint(argv[1]));
var i;
for i from 0 to processes - 1 {
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print(itostring(p)+" "+itostring(mine)+" = "+itostring(a)+"\n");
};
};
''This code requires at least Mesham version 1.0''
== Notes ==
The function main has been included here so that the user can provide, via command line arguments, the range of the random number to generate. The complexity of the prefix sums is hidden by using the reduce primitive communication type.
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]
[[Category:Example Codes]]
28fce7c787e3a1422c08ee4e0442f92d212ab024
752
751
2013-01-19T13:29:11Z
Polas
1
/* Notes */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Prefix sums is a very simple parallel algorithm commonly used as a building block of many applications. Also known as a scan, each process sums its own value with the values of all preceding processes. For instance, p=0 returns its own value, p=1 returns the sum of the p=1 and p=0 values, and p=2 returns the sum of the p=2, p=1 and p=0 values. The MPI reduce command often implements this communication using a logarithmic tree structure.
== Source Code ==
#include <maths>
#include <io>
#include <string>
var processes:=10;
function void main(var argc:Int,var argv:array[String]) {
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to processes - 1 {
var mine:Int; // Force to be an integer as randomnumber function defaults to double
mine:= randomnumber(0,toint(argv[1]));
var i;
for i from 0 to processes - 1 {
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print(itostring(p)+" "+itostring(mine)+" = "+itostring(a)+"\n");
};
};
''This code requires at least Mesham version 1.0''
== Notes ==
The user can provide, via command line arguments, the range of the random number to generate. The (relative) complexity of the prefix sums is hidden by using the reduce primitive communication type.
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]
[[Category:Example Codes]]
79d80c7cabb152f2327b52418bc6f6fe37416a79
753
752
2013-01-19T13:32:20Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Prefix sums is a very simple parallel algorithm commonly used as a building block of many applications. Also known as a scan, each process sums its own value with the values of all preceding processes. For instance, p=0 returns its own value, p=1 returns the sum of the p=1 and p=0 values, and p=2 returns the sum of the p=2, p=1 and p=0 values. The MPI reduce command often implements this communication using a logarithmic tree structure.
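The result being computed is exactly an inclusive scan; in Python it can be illustrated with itertools.accumulate:

```python
from itertools import accumulate

values = [3, 1, 4, 1, 5]           # one value per process, p = 0..4
prefix = list(accumulate(values))  # inclusive scan
print(prefix)                      # [3, 4, 8, 9, 14]

# Equivalent explicit form, mirroring how process p sums its own value
# with every preceding process's value:
sums = [sum(values[:p + 1]) for p in range(len(values))]
assert sums == prefix
```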
== Source Code ==
#include <maths>
#include <io>
#include <string>
var processes:=10;
function void main(var argc:Int,var argv:array[String]) {
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to processes - 1 {
var mine:Int; // Force to be an integer as randomnumber function defaults to double
mine:= randomnumber(0,toint(argv[1]));
var i;
for i from 0 to processes - 1 {
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print(itostring(p)+" "+itostring(mine)+" = "+itostring(a)+"\n");
};
};
''This code requires at least Mesham version 1.0''
== Notes ==
The user can provide, via command line options, the range of the random number to find. The (relative) complexity of the prefix sums is taken away by using the reduce primitive communication type.
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]. You can also download a legacy version for Mesham 0.5 [http://www.mesham.com/downloads/prefix-0.5.mesh here]
[[Category:Example Codes]]
92a317726e47048ea81784c9c08ae0d23505b15f
The Arjuna Compiler
0
162
885
884
2013-01-19T14:46:37Z
Polas
1
moved [[The Compiler]] to [[The Arjuna Compiler]]
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces standard C99 code which uses version 2 of the Message Passing Interface (MPI) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as LOGS) must also be linked in. The runtime library performs two roles. Firstly, it is architecture specific (versions exist for Linux, Windows etc.) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can execute it by double clicking it, and the program will automatically spawn the number of processes required. Alternatively, the executable can be run via the MPI daemon, and may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
== Translation In More Detail ==
The translator itself comprises a number of different phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete, the code is sent to the translation server - owing to the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP and from there can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, but a web-based interface also exists, allowing code to be written without the programmer installing any software on their machine.
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-I[dir]''' ''Look in the directory (as well as the current one) for preprocessor files''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-debug''' ''Display compiler structural warnings before rerunning''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically places a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine, so the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (and is linked in at runtime) - the advantages are that the executable is considerably smaller and a change to the RTL need not result in all your code requiring recompilation.
562198133541a2554eb259f16fc6bea9a8850aef
886
885
2013-01-19T14:47:50Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
''' This page refers to the [[Arjuna]] line of compilers which is up to version 0.5 and is legacy with respect to the latest [[Oubliette]] 1.0 line'''
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required; any of these will work with the generated code. Additionally, our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles - firstly, it is architecture specific (versions exist for Linux, Windows etc.) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, the runtime library contains functions which are called frequently and would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can execute it by double clicking it, and the program will automatically spawn the number of processes required. Alternatively, the executable can be run via the MPI daemon, and may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
== Translation In More Detail ==
The translator itself comprises a number of different phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete, the code is sent to the translation server - owing to the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP and from there can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, but a web-based interface also exists, allowing code to be written without the programmer installing any software on their machine.
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-I[dir]''' ''Look in the directory (as well as the current one) for preprocessor files''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-debug''' ''Display compiler structural warnings before rerunning''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically places a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine, so the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (and is linked in at runtime) - the advantages are that the executable is considerably smaller and a change to the RTL need not result in all your code requiring recompilation.
9d0272059fd28ab882195e8f14abd07e479687c8
887
886
2013-01-19T14:49:26Z
Polas
1
/* Overview */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
''' This page refers to the [[Arjuna]] line of compilers which is up to version 0.5 and is legacy with respect to the latest [[Oubliette]] 1.0 line'''
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required; any of these will work with the generated code. Additionally, our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles - firstly, it is architecture specific (versions exist for Linux, Windows etc.) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, the runtime library contains functions which are called frequently and would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can execute it by double clicking it, and the program will automatically spawn the number of processes required. Alternatively, the executable can be run via the MPI daemon, and may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
== Translation In More Detail ==
The translator itself comprises a number of different phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete, the code is sent to the translation server - owing to the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP and from there can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, but a web-based interface also exists, allowing code to be written without the programmer installing any software on their machine.
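The client/server split described above can be sketched in a few lines of Python. This is not the real FlexibO protocol - the wire format here (send the source, read the translated text back) is an assumption for illustration only; it simply shows the shape of a translation service listening over TCP/IP:

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    # Toy "translation server": accepts one connection, reads the source,
    # and replies with a stand-in for generated C code. The real server's
    # wire format is not documented here; this protocol is an assumption.
    server = socket.socket()
    server.bind((host, port))
    server.listen(1)
    port = server.getsockname()[1]

    def handle():
        conn, _ = server.accept()
        source = conn.recv(65536).decode()
        generated = "/* generated C */ // from %d bytes of Mesham" % len(source)
        conn.sendall(generated.encode())
        conn.close()
        server.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

def translate(source, host, port):
    # Client side: ship the preprocessed Mesham source to the server and
    # read the generated C code back over the same connection.
    client = socket.create_connection((host, port))
    client.sendall(source.encode())
    client.shutdown(socket.SHUT_WR)  # signal end of the source text
    reply = client.recv(65536).decode()
    client.close()
    return reply

port = serve_once()
print(translate("var x:Int;", "127.0.0.1", port))
```

Because the client only needs a socket, the server can sit on the local machine or a remote one, which is what makes the web-based front end possible.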
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-I[dir]''' ''Look in the directory (as well as the current one) for preprocessor files''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-debug''' ''Display compiler structural warnings before rerunning''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically places a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine, so the executable is completely self-contained. Linking dynamically means that the RTL must be present on the target machine (and is linked in at runtime) - the advantages are that the executable is considerably smaller and a change to the RTL need not result in all your code requiring recompilation.
0d156f5cb49d5db27a4034700f9ea364b810ae48
Arjuna
0
175
934
933
2013-01-19T14:48:23Z
Polas
1
wikitext
text/x-wiki
[[File:mesham.gif|right]]
==Introduction==
The Arjuna line of compilers for Mesham is versioned from 0.0 up to 0.99. The latest compiler release based upon the Arjuna core is [[Download_0.5|0.5]]. The reason for the distinction is that it was decided to rewrite the compiler, and as such a clear distinction between the architectures and technology is useful. Arjuna was the informal name of the language, and specifically of the compiler, before Mesham was decided upon.
== Download ==
'''The Arjuna line is entirely deprecated now, please use the [[Oubliette]] line'''
It is possible to download the latest Arjuna line version 0.5 [[Download_0.5|here]] and the compatible runtime can be found [[Download_rtl_0.2|here]]. Whilst the website examples and documentation have moved on, you can view the change lists to understand how to use the Arjuna line.
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object-oriented language designed for compiler writing (this is certainly the biggest project in that language). The reason for this choice was that, using this language, the compiler was fast to write and very flexible, although quite slow in translation. This aspect of the code is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, quite separate and connected in via defined services.
FlexibO does have its limits, and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2,000 lines, acts as a band-aid for FlexibO and, for instance, adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3,000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, [[Oubliette]], is actually based on the existing RTL, but changes and modifications to the language specification mean that the two are not mutually compatible.
For more information about the Arjuna compiler, have a look at [[The_Arjuna_Compiler]].
==Advantages==
Arjuna works by the compiler writer hand-crafting each aspect, whether it is a core function or library, specifying the resulting compiled code and any optimisation to be applied. Whilst this produces very efficient results, it is time consuming and does not allow the Mesham programmer to specify their own types in their code. Arjuna is also very flexible: vast changes to the language were quite easy to implement. This level of flexibility would not be present in other solutions, and from an iterative language design view it was an essential requirement.
==Disadvantages==
So why rewrite the compiler? Flexibility comes at a price: slow compilation. Now that the language has reached a level of maturity, the core aspects can be written without worry that they will change much. It would also be good to allow programmers to design and implement types in their own Mesham code, which the architecture of Arjuna would find difficult (although not impossible) to support.
There is the additional fact that Arjuna has been modified and patched so much that the initial clean design is starting to blur; with the lessons learned, a much cleaner compiler can be created.
85eadb4f4295256267fbaa96197c0f1900da65b4
935
934
2013-01-19T14:48:34Z
Polas
1
/* Technology */
wikitext
text/x-wiki
[[File:mesham.gif|right]]
==Introduction==
The Arjuna line of compilers for Mesham is versioned from 0.0 up to 0.99. The latest compiler release based upon the Arjuna core is [[Download_0.5|0.5]]. The reason for the distinction is that it was decided to rewrite the compiler, and as such a clear distinction between the architectures and technology is useful. Arjuna was the informal name of the language, and specifically of the compiler, before Mesham was decided upon.
== Download ==
'''The Arjuna line is entirely deprecated now, please use the [[Oubliette]] line'''
It is possible to download the latest Arjuna line version 0.5 [[Download_0.5|here]] and the compatible runtime can be found [[Download_rtl_0.2|here]]. Whilst the website examples and documentation have moved on, you can view the change lists to understand how to use the Arjuna line.
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object-oriented language designed for compiler writing (this is certainly the biggest project in that language). The reason for this choice was that, using this language, the compiler was fast to write and very flexible, although quite slow in translation. This aspect of the code is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, quite separate and connected in via defined services.
FlexibO does have its limits, and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2,000 lines, acts as a band-aid for FlexibO and, for instance, adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3,000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, [[Oubliette]], is actually based on the existing RTL, but changes and modifications to the language specification mean that the two are not mutually compatible.
For more information about the Arjuna compiler, have a look at [[The Arjuna Compiler]].
==Advantages==
Arjuna works by the compiler writer hand-crafting each aspect, whether it is a core function or library, specifying the resulting compiled code and any optimisation to be applied. Whilst this produces very efficient results, it is time consuming and does not allow the Mesham programmer to specify their own types in their code. Arjuna is also very flexible: vast changes to the language were quite easy to implement. This level of flexibility would not be present in other solutions, and from an iterative language design view it was an essential requirement.
==Disadvantages==
So why rewrite the compiler? Flexibility comes at a price: slow compilation. Now that the language has reached a level of maturity, the core aspects can be written without worry that they will change much. It would also be good to allow programmers to design and implement types in their own Mesham code, which the architecture of Arjuna would find difficult (although not impossible) to support.
There is the additional fact that Arjuna has been modified and patched so much that the initial clean design is starting to blur; with the lessons learned, a much cleaner compiler can be created.
5be5010e3cbd65ae1b4f06ee8cc96eef4c8b42b8
The Compiler
0
225
1246
2013-01-19T15:03:12Z
Polas
1
Created page with '== Overview == The core translator produces ANSI standard C99 C code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an…'
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles - firstly, it is architecture specific (versions exist for different flavours of Linux) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, the runtime library contains functions which are called frequently and would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can run their program with just one process, and the program will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed, and this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of different phases. Firstly, your Mesham code goes through a preprocessor which expands the directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code, if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library, and their behaviour is invoked via an API from the core compiler into the appropriate types.
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
af53a9c2586122b22a7481cf453054678597654d
1247
1246
2013-01-19T15:10:09Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles - firstly, it is architecture specific (versions exist for different flavours of Linux) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, the runtime library contains functions which are called frequently and would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can run their program with just one process, and the program will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed, and this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of different phases. Firstly, your Mesham code goes through a preprocessor which expands the directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code, if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library, and their behaviour is invoked via an API from the core compiler into the appropriate types.
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
b8aa9f1efc53bec5da333357a2e470f3f62d623d
1248
1247
2013-01-19T15:24:02Z
Polas
1
/* Command line options */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles - firstly, it is architecture specific (versions exist for different flavours of Linux) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, the runtime library contains functions which are called frequently and would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can run their program with just one process, and the program will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed, and this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance, 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of different phases. Firstly, your Mesham code goes through a preprocessor which expands the directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code, if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library, and their behaviour is invoked via an API from the core compiler into the appropriate types.
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
524b6ba8786422353768717990015f54b4dae346
1249
1248
2013-01-19T15:32:07Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles - firstly, it is architecture specific (versions exist for different flavours of Linux) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, the runtime library contains functions which are called frequently and would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. In order to allow for simplicity the user can run their program just with one process, the program will automatically spawn the number of processors required. Secondly the executable can be run with the exact number of processes needed and this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do) then the code can be executed properly on a multi core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each etc...)
Whilst earlier versions of the MPICH daemon allowed for the user to simply run their executable and the daemon would pick it up, ''Hydra'' which is the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable and the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at this preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. The types exist in a separate library, and the core compiler calls into the appropriate types via an API.
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler, although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select options such as the C compiler and the location of dependencies. It is not necessary to set all of these - a subset will be fine if appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
It is common to set these variables in the ''bashrc'' script, which is normally found in your home directory. To do so, add something like:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets the four variables to typical values; change the values as required for your system.
ef9653d8eb4476717406a8313e52535d27473802
1250
1249
2013-01-19T15:53:05Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform, it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png]]</center>
The resulting executable can be treated like any normal executable and run in a number of ways. For simplicity, the user can run their program with just one process and it will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed; this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at this preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. The types exist in a separate library, and the core compiler calls into the appropriate types via an API.
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler, although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select options such as the C compiler and the location of dependencies. It is not necessary to set all of these - a subset will be fine if appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
It is common to set these variables in the ''bashrc'' script, which is normally found in your home directory. To do so, add something like:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets the four variables to typical values; change the values as required for your system.
8251e5c36806f9b6a73d7ba1469f0361f28fdc4e
1251
1250
2013-01-19T15:53:49Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform, it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be treated like any normal executable and run in a number of ways. For simplicity, the user can run their program with just one process and it will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed; this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at this preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. The types exist in a separate library, and the core compiler calls into the appropriate types via an API.
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler, although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select options such as the C compiler and the location of dependencies. It is not necessary to set all of these - a subset will be fine if appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
It is common to set these variables in the ''bashrc'' script, which is normally found in your home directory. To do so, add something like:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets the four variables to typical values; change the values as required for your system.
9134d68b827cf5e67871de7facb71ba88e4a5577
1252
1251
2013-01-19T16:05:05Z
Polas
1
/* Compilation in more detail */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform, it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be treated like any normal executable and run in a number of ways. For simplicity, the user can run their program with just one process and it will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed; this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at this preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. The types exist in a separate library, and the core compiler calls into the appropriate types via an API.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler, although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select options such as the C compiler and the location of dependencies. It is not necessary to set all of these - a subset will be fine if appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
It is common to set these variables in the ''bashrc'' script, which is normally found in your home directory. To do so, add something like:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets the four variables to typical values; change the values as required for your system.
dc4ba64eb4659dcf5faf1d219e8fc3bde4580d86
1253
1252
2013-01-19T17:31:30Z
Polas
1
/* Compilation in more detail */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform, it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be treated like any normal executable and run in a number of ways. For simplicity, the user can run their program with just one process and it will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed; this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at this preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. The types exist in a separate library, and the core compiler calls into the appropriate types via an API.
<center>[[File:Oubliettelandscape.png|400px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler, although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select options such as the C compiler and the location of dependencies. It is not necessary to set all of these - a subset will be fine if appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
It is common to set these variables in the ''bashrc'' script, which is normally found in your home directory. To do so, add something like:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets the four variables to typical values; change the values as required for your system.
7b27278e7f3827704fed38100f81d5f17c2e3b95
1254
1253
2013-01-19T17:32:39Z
Polas
1
/* Compilation in more detail */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI-standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required on the target machine; any of these will work with the generated code. Additionally, our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux), as it contains any non-portable code which is needed and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform, it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be treated like any normal executable and run in a number of ways. For simplicity, the user can run their program with just one process and it will automatically spawn the number of processes required. Alternatively, the executable can be run with the exact number of processes needed; this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, and so on).
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
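As a concrete sketch of the launch options above (''myprog'' is a hypothetical executable name; the exact behaviour depends on your MPI installation):

```shell
# Launch with a single process; the generated executable will
# spawn the remaining processes it requires (Hydra needs mpiexec).
mpiexec -np 1 ./myprog

# Alternatively, launch with the exact number of processes needed,
# e.g. four processes wrapping around the available cores.
mpiexec -np 4 ./myprog
```

On a cluster, the second form is typically wrapped in a queue submission script instead of being typed directly.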
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at this preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. The types exist in a separate library, and the core compiler calls into the appropriate types via an API.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler, although it is possible to dump out this C code and compile it manually if desired.
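A manual build of the dumped IR might look like the following. This is only a sketch: the compiler binary name ''mesham'', the file names, and the linker flags are illustrative assumptions, not documented values.

```shell
# Dump the generated C99 IR to a file instead of building an
# executable (the -c option documented below).
mesham -c example.mesh

# Compile the dumped C code by hand with an MPI-aware C compiler,
# linking the Idaho runtime library and libgc as the Overview
# requires (library names here are assumed, not documented).
mpicc example.c -o example -lidaho -lgc
```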
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select options such as the C compiler and the location of dependencies. It is not necessary to set all of these - a subset will be fine if appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
It is common to set these variables in the ''bashrc'' script, which is normally found in your home directory. To do so, add something like:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets the four variables to typical values; change the values as required for your system.
dc4ba64eb4659dcf5faf1d219e8fc3bde4580d86
File:Meshamworkflow.png
6
226
1262
2013-01-19T15:52:18Z
Polas
1
Workflow of the oubliette Mesham compiler
wikitext
text/x-wiki
Workflow of the oubliette Mesham compiler
2f12daa92ed9113e2b742d63a1005e7a62142360
File:Oubliettelandscape.png
6
227
1264
2013-01-19T16:04:19Z
Polas
1
Oubliette landscape
wikitext
text/x-wiki
Oubliette landscape
6026efe464b9feb2efb09a99e374f9bc02b73847
Downloads
0
165
907
906
2013-01-19T16:22:32Z
Polas
1
wikitext
text/x-wiki
<metadesc>All the files provided for downloads involved with Mesham</metadesc>
''This page contains all the downloads available on this website''
== Compiler Files ==
These are the latest ([[Oubliette|oubliette]]) compiler files
== Legacy Arjuna Compiler Files ==
The [[Arjuna]] compiler line is legacy, but we have kept the downloads available in case people find them useful
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] ''legacy''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] ''legacy''
[http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b)] ''legacy''
[http://www.mesham.com/downloads/libraries01.zip Runtime Library 0.1 source] ''legacy''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''legacy''
== Example Codes ==
[http://www.mesham.com/downloads/npb.tar.gz NASA's Parallel Benchmark IS]
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
4adb1b7849f733ac4ad82021da531ad72461b2fc
908
907
2013-01-19T16:22:48Z
Polas
1
wikitext
text/x-wiki
<metadesc>All the files provided for downloads involved with Mesham</metadesc>
''This page contains all the downloads available on this website''
== Compiler Files ==
These are the latest ([[Oubliette|oubliette]]) compiler files
== Legacy Arjuna Compiler Files ==
The [[Arjuna]] compiler line is legacy, but we have kept the downloads available in case people find them useful
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] ''legacy''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] ''legacy''
[http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b)] ''legacy''
[http://www.mesham.com/downloads/libraries01.zip Runtime Library 0.1 source] ''legacy''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''legacy''
== Example Codes ==
[http://www.mesham.com/downloads/npb.tar.gz NASA's Parallel Benchmark IS]
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
cdc15dfcb937ded452be10145742bdedb201d9ca
909
908
2013-01-19T16:28:02Z
Polas
1
wikitext
text/x-wiki
<metadesc>All the files provided for downloads involved with Mesham</metadesc>
''This page contains all the downloads available on this website''
== Latest compiler ==
These are the latest ([[Oubliette|oubliette]]) compiler files
== Language specification ==
[http://www.mesham.com/downloads/specification1a3.pdf Mesham language specification 1.0a3]
== Legacy Arjuna compiler files ==
The [[Arjuna]] compiler line is legacy, but we have kept the downloads available in case people find them useful
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] ''legacy''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] ''legacy''
[http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b)] ''legacy''
[http://www.mesham.com/downloads/libraries01.zip Runtime Library 0.1 source] ''legacy''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''legacy''
== Example codes ==
[http://www.mesham.com/downloads/npb.tar.gz NASA's Parallel Benchmark IS]
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
ca879a7ea1a179032f349badf862424dd7f70d46
Arjuna
0
175
936
935
2013-01-19T16:26:49Z
Polas
1
/* Download */
wikitext
text/x-wiki
[[File:mesham.gif|right]]
==Introduction==
The Arjuna line of compilers for Mesham is versioned from 0.0 up to 0.99. The latest compiler release based upon the Arjuna core is [[Download_0.5|0.5]]. The distinction exists because it was decided to rewrite the compiler, so a clear separation between the two architectures and technologies is useful. Arjuna was the informal name of the language, and specifically of the compiler, before the name Mesham was decided upon.
== Download ==
'''The Arjuna line is entirely deprecated now, please use the [[Oubliette]] line'''
It is possible to download the latest Arjuna line version 0.5 [[Download_0.5|here]] and the compatible runtime can be found [[Download_rtl_0.2|here]]. Whilst the website examples and documentation have moved on, you can view the change lists to understand how to use the Arjuna line.
We also provide an earlier version (0.41b) which is the last released version to support the Windows operating system. That version can be downloaded [[Download_0.41_beta|here]] and the corresponding runtime library [[Download_rtl_0.1|here]].
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object-oriented language designed for compiler writing (Arjuna is certainly the biggest project written in that language). The reason for this choice was that the compiler was fast to write and very flexible in FlexibO, although quite slow in translation. This part of the code is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, kept quite separate and connected via defined services.
FlexibO does have its limits, and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2,000 lines, acts as a band-aid for FlexibO; for instance, it adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3,000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, [[Oubliette]], is actually based on the existing RTL, but changes and modifications to the language specification mean that the two are not mutually compatible.
For more information about the Arjuna compiler, have a look at [[The Arjuna Compiler]].
==Advantages==
Arjuna works by the compiler writer hand-crafting each aspect, whether a core function or a library, specifying the resulting compiled code and any optimisation to be applied. Whilst this produces very efficient results, it is time consuming and does not allow the Mesham programmer to specify their own types in their code. Arjuna is also very flexible: vast changes to the language were quite easy to implement. This level of flexibility would not be present in other solutions, so from an iterative language design point of view it was an essential requirement.
==Disadvantages==
So why rewrite the compiler? Flexibility comes at a price: slow compilation. Now that the language has reached a level of maturity, the core aspects can be written without worry that they will change much. It would also be good to allow programmers to design and implement types in their own Mesham code, which the architecture of Arjuna would find difficult (although not impossible) to support.
There is the additional fact that Arjuna has been modified and patched so much that the initial clean design is starting to blur; with the lessons learned, a much cleaner compiler can be created.
5ff5b5348b37f24f4083b5955d0e254d80e29f04
File:Oubliette.png
6
228
1266
2013-01-19T16:43:03Z
Polas
1
Oubliette Mesham logo
wikitext
text/x-wiki
Oubliette Mesham logo
f65cd21ac0c4b7ae1f8443f707ffdcd41ef126cb
1267
1266
2013-01-19T17:03:02Z
Polas
1
uploaded a new version of "[[File:Oubliette.png]]": Oubliette Mesham icon
wikitext
text/x-wiki
Oubliette Mesham logo
f65cd21ac0c4b7ae1f8443f707ffdcd41ef126cb
Download 1.0
0
229
1269
2013-01-19T17:19:29Z
Polas
1
Created page with '<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc> {{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Bro…'
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
== Download ==
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
775245b2dacea88cf850f39f9f81f1837efb650f
1270
1269
2013-01-19T17:45:17Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Installation Instructions ==
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and ''all components'' archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''idaho'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables, then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Now that this is done we are good to go; issue ''mcc -env'', which will display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 1 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
   group 0,1,2,3 {
     print("Hello from process "+itostring(pid())+"\n");
   };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
5c7b945a507e66c22c47709774d1a9492112453c
1271
1270
2013-01-19T17:46:06Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and ''all components'' archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''idaho'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables, then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Now that this is done we are good to go; issue ''mcc -env'', which will display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 1 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
   group 0,1,2,3 {
     print("Hello from process "+itostring(pid())+"\n");
   };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
66525afe4497fdce67c5cccbb9280669efc9acd7
1272
1271
2013-01-19T17:47:27Z
Polas
1
/* Introduction */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and ''all components'' archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''idaho'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables, then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Now that this is done we are good to go; issue ''mcc -env'', which will display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 1 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
   group 0,1,2,3 {
     print("Hello from process "+itostring(pid())+"\n");
   };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
b2babcd03ebe40dcc2bbc38a8f6548a4df600dfd
1273
1272
2013-01-19T18:09:03Z
Polas
1
/* Installation Instructions */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and ''all components'' archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables, then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Now that this is done we are good to go; issue ''mcc -env'', which will display the environment variables.
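The symlink alternative can be sketched as follows. Note that ''PREFIX'' is a hypothetical unpack location, and a temporary directory stands in for ''/usr/lib'' and ''/usr/include'' so the commands can be demonstrated without root; on a real system you would link into the system directories instead.

```shell
# PREFIX is a hypothetical unpack location; DEST stands in for /usr/lib
# and /usr/include (on a real system you would symlink there instead).
PREFIX=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$PREFIX/rtl"
# Stand-in files representing the runtime library, libgc and the header.
touch "$PREFIX/rtl/libmesham.so" "$PREFIX/rtl/libgc.so" "$PREFIX/rtl/mesham.h"

# The actual linking step described above:
ln -s "$PREFIX/rtl/libmesham.so" "$DEST/libmesham.so"
ln -s "$PREFIX/rtl/libgc.so"     "$DEST/libgc.so"
ln -s "$PREFIX/rtl/mesham.h"     "$DEST/mesham.h"
ls -l "$DEST"
```

With the links in place under the real system directories, the C compiler can find ''mesham.h'' and link against the runtime without the two environment variables being set.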
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 1 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
   group 0,1,2,3 {
     print("Hello from process "+itostring(pid())+"\n");
   };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
b01a4fdf1a297c4f8ea6a3175c5cb2f50248b9d6
Oubliette
0
176
941
940
2013-01-19T17:20:17Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]] line, drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and it is intended in the future to support extra libraries via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded in the compiler, Oubliette simply considers these to be normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
10c1e296b87765fbbfe71b7b182b6b6735c01af0
Tutorial - Hello world
0
214
1167
1166
2013-01-19T17:48:09Z
Polas
1
wikitext
text/x-wiki
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program and use the standard functions, and we will discuss different forms of parallel structure. This tutorial assumes that you have the Mesham compiler and runtime library installed and working on your machine as per the instructions [[Download_1.0|here]].
== Hello world ==
 #include <io>
 #include <parallel>
 #include <string>
 function void main() {
   var p;
   par p from 0 to 3 {
     print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
   };
 };
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name in the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with one process only, it will spawn any other processes it needs. However, the code can only be run with the correct number of processes or with one - any other number is assumed to be a mistake and will result in an error message.
On running the code you should see the following output, although the order of the lines may differ:
 Hello world from pid=0 with p=0
 Hello world from pid=2 with p=2
 Hello world from pid=1 with p=1
 Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a closer look at the code and see exactly what it is doing. Lines 1 to 3 include standard function headers - the program uses function calls from all three of these sub-libraries (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''). Wrapping the name in < > brackets tells the preprocessor to first look for system includes (as these are).
Line 5 declares the main function, which is the program entry point; all compiled codes that you wish to execute require this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. On line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which basically says ''execute this loop's four iterations, from 0 to 3, in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard out. The return value of the [[Pid|pid]] function, which provides us with the current process's absolute id, and the variable ''p'' are both of type [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement). It is only possible to print [[String|Strings]], so the [[Itostring|itostring]] function is called to convert the integer value to a string.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel). Secondly, see how we have displayed both the process ID (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they are probably equal, there is no guarantee of this - the language will allocate the iterations of a [[Par|par]] loop to the processes as it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to the processes which it feels are most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see some output similar to the following (but perhaps with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2, and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing, using the skip command, and at the same time run the par loop.'' In fact a [[Par|par]] loop is a syntactic shortcut for lots of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look really messy!)
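To make the ''syntactic shortcut'' idea a little more concrete, here is a rough sketch of the par loop unrolled into four parallel-composed statements. Treat this as illustrative pseudocode rather than guaranteed-valid Mesham - in a real unrolling each branch would also need its own binding of ''p'', which we simply hard-code into the messages here:
print("iteration 0 running\n") ||
print("iteration 1 running\n") ||
print("iteration 2 running\n") ||
print("iteration 3 running\n");
Each of the four statements is placed on a process by the parallel composition, just as the par loop placed each of its iterations on its own process.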
== Absolute process selection ==
We have already said that the [[Par|par]] loop makes no guarantee as to which iteration is placed upon which process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs: the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
If you compile and execute this, it will display two lines of text - one saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!), you can imagine that if you want to select multiple processes to do the same thing by their absolute process IDs, the many duplicate proc statements in your code would be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement, which allows the programmer to select multiple processes to execute the same block. Based upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes by their absolute process IDs, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we cannot leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' would be and compilation would produce an error (try it!)
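As a small, hypothetical variation - assuming, as the skip example suggests, that an ordinary statement may appear on the left-hand side of || - the skip could be replaced by some real work for process 0 to do at the same time as the group block runs:
#include <io>
#include <parallel>
#include <string>
function void main() {
print("Process 0 has some work of its own\n") ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
Following the behaviour described above, process 0 would run the left-hand print first and then, since the group includes it, the group block as well.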
== Summary ==
Whilst the code we have been looking at here is very simple, in this tutorial we have covered the four basic parallel constructs which we can use to structure our code and discussed the differences between them. We have also looked at writing a simple Mesham code using the main function, and at using standard functions by including the appropriate sub libraries.
[[Category:Tutorials|Hello world]]
818396bf7abe38a347924d87a5125ca84b29f1b2
1171
1170
2013-01-19T18:24:06Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham first tutorial providing an introduction to the language</metadesc>
'''Tutorial number one''' - [[Tutorial_-_Simple_Types|next]]
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program, use the standard functions, and employ different forms of parallel structure. This tutorial assumes that you have the Mesham compiler and runtime library installed and working on your machine, as per the instructions [[Download_1.0|here]].
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name in the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
To run the code, issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with only one process, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
On running the code you should see the following output, although the order of the lines may differ:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a closer look at the code and see exactly what it is doing. Lines 1 to 3 include standard function headers - we are using function calls from all three of these sub libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''). Wrapping the names in < > brackets tells the preprocessor to look first for system includes (which these are).
Line 5 declares the main function, which is the program entry point; all compiled codes that you wish to execute require this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. On line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which is basically saying ''execute this loop from 0 to 3 (four iterations) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard out. The return value of the [[Pid|pid]] function, which provides us with the current process's absolute ID, and the variable ''p'' are both [[Int|Ints]] (the latter deduced because ''p'' is used in the [[Par|par]] statement.) It is only possible to print out [[String|Strings]], so the [[Itostring|itostring]] function is called to convert from an integer to a string value.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel.) Secondly, see how we have displayed both the process ID (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they happen to match, there is no guarantee that these will be equal - the language will allocate the iterations of a [[Par|par]] loop to the processes as it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You will have just read that the [[Par|par]] loop assigns iterations to the processes which it feels are most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see some output similar to the following (but possibly with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2 and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing via the skip command and at the same time run the par loop.'' In fact, a [[Par|par]] loop is a syntactic shortcut for many parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look rather messy!)
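To illustrate, the par loop above could conceptually be unrolled into four explicitly composed blocks, roughly as follows. This is an illustrative sketch only - the per-block declaration of ''p'' is our assumption about how such an expansion might look, not actual compiler output:
#include <io>
#include <parallel>
#include <string>
function void main() {
   skip ||
   { var p:=0; print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n"); } ||
   { var p:=1; print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n"); } ||
   { var p:=2; print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n"); } ||
   { var p:=3; print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n"); };
};
Each block composed with || runs in its own process, which is exactly the behaviour the par loop provides in a far more compact form.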
== Absolute process selection ==
We have already said that the [[Par|par]] loop does not make any guarantee as to which iteration is placed upon which process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs: the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
Which, if you compile and execute, will display two lines of text - one saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!), you can imagine that if you want to select multiple processes to do the same thing by their absolute process IDs, then many duplicate proc statements in your code would be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement, which allows the programmer to select multiple processes to execute the same block. Based upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes by their absolute process IDs, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we cannot leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' would be and would produce an error during compilation (try it!)
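Because the group statement takes an explicit list of process IDs, you are free to select any subset of processes. The following sketch (our own variation on the example above, not part of the original tutorial) selects only the even-numbered processes:
#include <io>
#include <parallel>
#include <string>
function void main() {
   skip ||
   group 0,2 {
      print("Hello world from pid="+itostring(pid())+"\n");
   };
};
If this behaves like the example above, only processes 0 and 2 should print a message.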
== Summary ==
Whilst the code we have been looking at here is very simple, in this tutorial we have covered the four basic parallel constructs which we can use to structure our code and discussed the differences between them. We have also looked at writing a simple Mesham code using the main function, and at using standard functions by including the appropriate sub libraries.
[[Category:Tutorials|Hello world]]
22f3321715e9f023d7b711f2bf254883b6a885fc
Template:Downloads
10
11
58
57
2013-01-19T17:49:30Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|All (''version 1.0.0_232'')]]
*[[Download_rtl_1.0|Runtime Library 1.0.0]]
<hr>
*[[Arjuna|Legacy versions]]
8e27bd5fe2fa4e4d8e62b45b8fe3198bd42dbea0
59
58
2013-01-19T17:50:23Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|All (''version 1.0.0_232'')]]
*[[Download_rtl_1.0|Runtime library 1.0.0]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
6fc3c62689684f2273aeb05055737f09f1167db4
60
59
2013-01-19T17:51:04Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|Complete compiler (''version 1.0.0_232'')]]
*[[Download_rtl_1.0|Runtime library 1.0.0]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
b40493385211386f98b034f489e23a33a6d1fb2a
File:Robot-cleaner.jpg
6
230
1292
2013-01-19T18:01:11Z
Polas
1
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
1293
1292
2013-01-19T18:02:39Z
Polas
1
uploaded a new version of "[[File:Robot-cleaner.jpg]]"
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Download libgc
0
231
1295
2013-01-19T18:03:48Z
Polas
1
Created page with '<metadesc>Mesham uses lib GC to garbage collect during execution, download it here</metadesc> {{Applicationbox|name=Lib GC 7.2|author=Hans Boehm|desc=Garbage collector library us…'
wikitext
text/x-wiki
<metadesc>Mesham uses lib GC to garbage collect during execution, download it here</metadesc>
{{Applicationbox|name=Lib GC 7.2|author=Hans Boehm|desc=Garbage collector library used by the Mesham runtime library.|url=http://www.hpl.hp.com/personal/Hans_Boehm/gc/|image=Robot-cleaner.jpg|version=7.2|released=May 2012}}
== Introduction ==
The default runtime library uses the Boehm-Demers-Weiser conservative garbage collector. It allows one to allocate memory, without explicitly deallocating it when it is no longer useful. The collector automatically recycles memory when it determines that it can no longer be otherwise accessed.
== Download ==
We provide a download link ''64 bit here'' and ''32 bit here'' to precompiled library versions, which are all that is required to use Mesham. We suggest you use these provided, precompiled forms as they have been tested with Mesham. It is likely that future versions (later than 7.2) will work fine, although they will not necessarily have been tested.
You can access further information, documentation and download the latest source code from the project website [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ here].
d6a724a5d29c8953b65d4111584a59fe4054dc3a
Download rtl 1.0
0
232
1299
2013-01-19T18:08:37Z
Polas
1
Created page with '<metadesc>Mesham type oriented parallel programming language runtime library</metadesc> {{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest…'
wikitext
text/x-wiki
<metadesc>Mesham type oriented parallel programming language runtime library</metadesc>
{{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest runtime library compatible with version 1.0 of the Mesham compiler.|url=http://www.mesham.com|image=Runtimelibrary.png|version=1.0.0|released=January 2013}}
== Runtime Library Version 1.0 ==
Version 1.0 is currently the most up-to-date version of the Mesham runtime library and is required by Mesham 1.0. This version of the library has been re-engineered to support the [[Oubliette]] compiler line and as such is not backwards compatible with older versions.
This line of runtime library is known as the [[Idaho]] line.
== Download ==
You can download the runtime library, '''64 bit here''' and '''32 bit here'''
== Garbage collector ==
By default you will also need the lib GC garbage collector which can be found [[Download_libgc|here]].
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 1.0|download 1.0 package]] page.
486e57beda8c6d9270a0ce965238c4df2c397c0b
Download rtl 0.2
0
159
874
873
2013-01-19T18:09:18Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Runtime library 0.2|author=[[User:polas|Nick Brown]]|desc=The runtime library required for Mesham 0.5.|url=http://www.mesham.com|image=Runtimelibrary.png|version=0.2|released=January 2010}}
''Please Note: This version of the runtime library is deprecated but required for [[Download_0.5|Mesham 0.5]]''
== Runtime Library Version 0.2 ==
Version 0.2 is a legacy version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many improvements over the previous version and as such it is suggested you use it. The version on this page is backwards compatible with version 0.41(b). This version does not explicitly support the Windows OS, although it should be possible for an experienced programmer to install it on that system.
== Download ==
You can download the [http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2 here] (28KB)
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 0.5|Download 0.5 Package]] page.
c03a27f82ed564f4f3572a8f41b9f66c2ba12a65
Idaho
0
233
1306
2013-01-19T18:19:44Z
Polas
1
Created page with '[[File:Runtimelibrary.png|right]] == Introduction == Idaho is the name of the reengineered Mesham runtime library. We have always given parts of the language different nickname…'
wikitext
text/x-wiki
[[File:Runtimelibrary.png|right]]
== Introduction ==
Idaho is the name of the reengineered Mesham runtime library. We have always given parts of the language different nicknames and [[Oubliette]] is the name of the reengineered compiler that requires Idaho. The runtime library is used by a compiled executable whilst it is running and, apart from providing much of the lower level language functionality such as memory allocation, remote memory (communication) management and timing, it also provides the native functions which much of the standard function library requires.
We have designed the system in this manner so that platform specific behaviour can be contained within this library, and the intention is that a version of the library will exist for multiple platforms. Secondly, by modifying the library it is possible to tune how Mesham executables will run, such as changing the garbage collection strategy.
== API ==
The set of functions which Idaho provides can be viewed in the ''mesham.h'' header file. The source code is intended to be released when it is more mature.
a853ee3530087003fec2dbf296e395a0c53ade84
1307
1306
2013-01-19T18:20:13Z
Polas
1
wikitext
text/x-wiki
<metadesc>Idaho is the Mesham runtime library</metadesc>
[[File:Runtimelibrary.png|right]]
== Introduction ==
Idaho is the name of the reengineered Mesham runtime library. We have always given parts of the language different nicknames and [[Oubliette]] is the name of the reengineered compiler that requires Idaho. The runtime library is used by a compiled executable whilst it is running and, apart from providing much of the lower level language functionality such as memory allocation, remote memory (communication) management and timing, it also provides the native functions which much of the standard function library requires.
We have designed the system in this manner so that platform specific behaviour can be contained within this library, and the intention is that a version of the library will exist for multiple platforms. Secondly, by modifying the library it is possible to tune how Mesham executables will run, such as changing the garbage collection strategy.
== API ==
The set of functions which Idaho provides can be viewed in the ''mesham.h'' header file. The source code is intended to be released when it is more mature.
6fe35d4a01859fe88a13d16a03282e9ede6c4bb8
Tutorial - Simple Types
0
219
1194
1193
2013-01-19T18:23:11Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham tutorial detailing an overview of how type oriented programming is used in the language</metadesc>
== Introduction ==
'''Tutorial number two''' - [[Tutorial_-_Hello world|prev]] :: [[Tutorial_-_Functions|next]]
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
#include <io>
#include <string>
function void main() {
var a:=78;
print(itostring(a)+"\n");
};
In the above code snippet we have included the appropriate system headers (for printing and integer to string conversion), specified our program entry point via the main function and declared variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as in the [[Tutorial - Hello world|Hello world]] tutorial with variable ''p'', which was inferred to be an [[Int]] later on as it was used in a [[Par|par]] statement.)
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, there are a number of other default types associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type which specifies that memory is allocated, and lastly the [[Multiple|multiple]] type which specifies that the variable is allocated on all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - so the behaviour of the types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to the left of it.
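Given the right-to-left precedence just described, appending heap to the end of the default chain should cause the variable to be allocated on the heap rather than the stack. A sketch of the idea (our own illustration, assuming heap can simply be chained on like any other type):
#include <io>
#include <string>
function void main() {
   var a:Int::stack::onesided::allocated[multiple[]]::heap;
   a:=78;
   print(itostring(a)+"\n");
};
Here the [[Heap|heap]] type, being rightmost, would win over the [[Stack|stack]] type earlier in the chain.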
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example we saw - we have simply specified explicitly the type of variable ''a'' to be the type chain that was inferred in the first example. As you can see, being able to write code without all these explicit types saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing empty ''[]'' braces is entirely optional.
Every type chain must have at least one [[:Category:Element Types|element type]] contained within it. Convention dictates that all [[:Category:Element Types|element types]] start with a capitalised first letter (such as [[Int]], [[Char]] and [[Bool]]) whereas all other types, known as [[:Category:Compound Types|compound types]], start with a lower case first letter (such as [[Stack|stack]], [[Multiple|multiple]] and [[Allocated|allocated]].)
=== Let's go parallel ===
The code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we involve two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync a;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, with the [[On|on]] type provided as a parameter and the value ''0'' provided to that. This allocates variable ''a'' in the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this will involve some form of shared memory communication to get the value across (as defined by the [[Onesided|onesided]] communication type, which is used by default.) Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is found at line 9: the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, and all processes which need to will then write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writing there is no guarantee which value will be used if they differ in the same step. Additionally, you can see how we have specified the variable name after the [[Sync|sync]] here; this means synchronise on that variable alone - if you omit it then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see how process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to zero.)
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync a;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who writes the value. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct will create the preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the values of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables is allowed in global scope outside of a function body.
== Changing the type ==
As Mesham code runs we can change the type of a variable by modifying the chain, as illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this - see an error at line 5? Don't worry, that was entirely expected. We type variable ''a'' to be an [[Int]] (with all the default types that go with it) and perform an assignment at line 3, which goes ahead fine. But at line 4 we modify the type of ''a'', via the set type operator '':'', to be the current type of ''a'' chained with the [[Const|const]] type, which forces the variable to be read only. Hence the assignment at line 5 fails, because the type of variable ''a'' now has the [[Const|const]] type in its chain. By removing either this assignment or the type modification at line 4, the code will compile fine.
Modifying types in this form can be very powerful, but there are some points to bear in mind. Firstly, it is not possible to modify the [[Allocated|allocated]] type or its contents - we are changing the behaviour of a variable but not if and where it is allocated in memory - and doing so will result in an error. Secondly, modifying a type binds the modification to the local scope; once we leave this scope the type reverts back to what it was before.
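As a sketch of this scoping behaviour (our own illustration - we are assuming here that a [[Proc|proc]] block introduces a local scope, which the tutorial does not state explicitly), a const modification made inside a block should no longer apply once that block has been left:
function void main() {
   var a:Int;
   a:=23;
   proc 0 {
      a:a::const;
   };
   a:=3;
};
Under that assumption, the assignment on the final line would be legal because the [[Const|const]] modification was bound to the inner block's scope and has reverted.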
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile, because the programmer has specified that, just for the assignment at line 4, the [[Const|const]] type is appended to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
[[Category:Tutorials|Simple Types]]
e221fa24a720beda3c6c07cbf5c9c21694915a21
1195
1194
2013-01-19T18:23:30Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham tutorial detailing an overview of how type oriented programming is used in the language</metadesc>
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
'''Tutorial number two''' - [[Tutorial_-_Hello world|prev]] :: [[Tutorial_-_Functions|next]]
== A question of types ==
#include <io>
#include <string>
function void main() {
var a:=78;
print(itostring(a)+"\n");
};
In the above code snippet we have included the appropriate system headers (for printing and integer to string conversion), specified our program entry point via the main function and declared variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as in the [[Tutorial - Hello world|Hello world]] tutorial with variable ''p'', which was inferred to be an [[Int]] later on as it was used in a [[Par|par]] statement.)
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, there are a number of other default types associated with an integer: the [[Stack|stack]] type to specify that it is allocated in the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type which specifies that memory is allocated, and lastly the [[Multiple|multiple]] type which specifies that the variable is allocated on all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - so the behaviour of the types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to the left of it.
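Given the right-to-left precedence just described, appending heap to the end of the default chain should cause the variable to be allocated on the heap rather than the stack. A sketch of the idea (our own illustration, assuming heap can simply be chained on like any other type):
#include <io>
#include <string>
function void main() {
   var a:Int::stack::onesided::allocated[multiple[]]::heap;
   a:=78;
   print(itostring(a)+"\n");
};
Here the [[Heap|heap]] type, being rightmost, would win over the [[Stack|stack]] type earlier in the chain.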
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example we saw - we have simply specified explicitly the type of variable ''a'' to be the type chain that was inferred in the first example. As you can see, being able to write code without all these explicit types saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing empty ''[]'' braces is entirely optional.
Every type chain must have at least one [[:Category:Element Types|element type]] contained within it. Convention dictates that all [[:Category:Element Types|element types]] start with a capitalised first letter (such as [[Int]], [[Char]] and [[Bool]]) whereas all other types, known as [[:Category:Compound Types|compound types]], start with a lower case first letter (such as [[Stack|stack]], [[Multiple|multiple]] and [[Allocated|allocated]].)
=== Let's go parallel ===
The code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we involve two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync a;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, with the [[On|on]] type provided as a parameter and the value ''0'' provided to that. This allocates variable ''a'' in the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this will involve some form of shared memory communication to get the value across (as defined by the [[Onesided|onesided]] communication type, which is used by default.) Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is found at line 9: the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, and all processes which need to will then write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writing there is no guarantee which value will be used if they differ in the same step. Additionally, you can see how we have specified the variable name after the [[Sync|sync]] here; this means synchronise on that variable alone - if you omit it then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see how process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to zero.)
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync a;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who writes the value. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct will create the preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the values of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables is allowed in global scope outside of a function body.
== Changing the type ==
As the Mesham code runs we can change the type of a variable by modifying the chain, this is illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this - see an error at line 5? Don't worry, that was entirely expected - because we are typing variable ''a'' to be an [[Int]] (and all the defaults types that go with it), performing an assignment at line 3 which goes ahead fine but then at line 4 we are modifying the type of ''a'' via the set type operator '':'' to be the current type of ''a'' chained with the [[Const|const]] type which forces the variable to be read only. Hence the assignment at line 5 fails because the type of variable ''a'' has the [[Const|const]] type in the chain. By removing this assignment or the type modification at line 4 the code will compile fine.
Modifying types in this form can be very powerful but there are some points to bear in mind. Firstly it is not possible to modify the [[Allocated|allocated]] type or its contents as we are changing the behaviour of a variable but not if and where it is allocated in memory, doing so will result in an error. Secondly, modifying a type will bind this modification to the local scope and once we leave this scope then the type shall be reverted back to what it was before.
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile because the programmer has specified that just for the assignment at line 4, to append the [[Const|const]] type to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
[[Category:Tutorials|Simple Types]]
1d9bbf0a71be263a16c1609101ecedf4e8f0737b
1196
1195
2013-01-19T18:23:48Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham tutorial detailing an overview of how type oriented programming is used in the language</metadesc>
'''Tutorial number two''' - [[Tutorial_-_Hello world|prev]] :: [[Tutorial_-_Functions|next]]
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
#include <io>
#include <string>
function void main() {
var a:=78;
print(itostring(a)+"\n");
};
In the above code snippet we have included the appropriate system headers (for printing and for integer to string conversion), specified our program entry point via the main function and declared variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as in the [[Tutorial - Hello world|Hello world]] tutorial, where variable ''p'' was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement.)
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type, to specify that it is allocated in the stack frame of the current function; the [[Onesided|onesided]] type, which determines that it uses one sided (variable sharing) communication; the [[Allocated|allocated]] type, which specifies that memory is allocated; and lastly the [[Multiple|multiple]] type, which specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced via inference all this behaviour, which can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together and precedence is from right to left - the behaviour of the types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example that we saw - except that we have explicitly specified the type of variable ''a'' to be the type chain that was inferred in the first example. As you can see, being able to write code without all these explicit types saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, it is entirely optional whether to provide empty ''[]'' braces or not.
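The right-to-left precedence of the type chaining operator can be pictured as later entries overriding earlier ones. As a loose illustration (Python rather than Mesham, with hypothetical property names such as "storage" and "comms"), merging per-type property dictionaries where the rightmost wins reproduces this behaviour:

```python
# Loose Python analogy for Mesham type chains: each type contributes some
# behaviour, and types further right in the chain override any conflicting
# behaviour contributed by types to their left. The property names
# ("storage", "comms", "placement") are hypothetical labels, not Mesham.

def chain(*types):
    """Combine type property dicts left to right; rightmost wins."""
    combined = {}
    for t in types:
        combined.update(t)  # later (rightward) entries override earlier ones
    return combined

Int = {"element": "Int"}
onesided = {"comms": "onesided"}
stack = {"storage": "stack"}
heap = {"storage": "heap"}
allocated_multiple = {"placement": "multiple"}

# Int::onesided::stack::allocated[multiple[]]
default_chain = chain(Int, onesided, stack, allocated_multiple)
print(default_chain["storage"])   # stack

# Appending heap to the rightmost end overrides the stack behaviour
heap_chain = chain(Int, onesided, stack, allocated_multiple, heap)
print(heap_chain["storage"])      # heap
```

This is only a mental model; in Mesham the resolution happens at compile time when the chain is analysed.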
All type chains must contain at least one [[:Category:Element Types|element type]]. Convention dictates that all [[:Category:Element Types|element types]] start with a capitalised first letter (such as [[Int]], [[Char]] and [[Bool]]) whereas all other types, known as [[:Category:Compound Types|compound types]], start with a lower case first letter (such as [[Stack|stack]], [[Multiple|multiple]] and [[Allocated|allocated]].)
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we are involving two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync a;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, providing the [[On|on]] type as a parameter with the value ''0''. What this does is allocate variable ''a'' to the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 is writing the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this will involve some form of shared memory communication to get that value across (as defined by the [[Onesided|onesided]] communication type, which is used by default.) Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is found on line 9: the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, and all processes which need to will then write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writing there is no guarantee which value will be used if they differ in the same step. Additionally, you can see how we have specified the variable name after the [[Sync|sync]] here; this means to synchronise on that variable alone - if you omit it then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see how process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to the zero value.)
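The role [[Sync|sync]] plays here can be pictured outside Mesham too. The sketch below uses Python threads rather than Mesham processes (an assumption of the illustration), with a `Barrier` standing in for ''sync a'': the writer deposits the value before the barrier, and the reader only inspects it afterwards.

```python
import threading

# Sketch of the sync idea using Python threads: the writer deposits 78 into
# shared storage and the reader only looks at it once both sides have
# passed the barrier, mirroring "sync a" acting as a barrier.
a = [0]                        # shared storage, zero-initialised like an Int
barrier = threading.Barrier(2)
result = []

def writer():                  # plays the part of "proc 1 { a:=78; }"
    a[0] = 78
    barrier.wait()             # plays the part of "sync a"

def reader():                  # plays the part of "proc 0 { print(...); }"
    barrier.wait()
    result.append(a[0])

t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader)
t1.start(); t2.start()
t1.join(); t2.join()
print(result[0])               # 78 - the write is visible after the barrier
```

Skipping the barrier in this sketch would be the analogue of commenting out the sync line: the reader might observe the initial zero instead.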
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync a;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who does the value writing. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct will create the preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the value of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables is allowed in global scope outside of a function body.
== Changing the type ==
As the Mesham code runs we can change the type of a variable by modifying the chain, this is illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this - see an error at line 5? Don't worry, that was entirely expected. We are typing variable ''a'' to be an [[Int]] (with all the default types that go with it) and performing an assignment at line 3, which goes ahead fine; but then at line 4 we modify the type of ''a'' via the set type operator '':'' to be the current type of ''a'' chained with the [[Const|const]] type, which forces the variable to be read only. Hence the assignment at line 5 fails, because the type of variable ''a'' now has the [[Const|const]] type in its chain. By removing this assignment, or the type modification at line 4, the code will compile fine.
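The effect of appending [[Const|const]] - assignments being rejected from that point on - has a loose analogue in many languages. A sketch in Python (an illustration only, with hypothetical class names; Python rejects the write at run time, whereas Mesham rejects it at compile time):

```python
from dataclasses import dataclass, FrozenInstanceError

# Loose analogue of appending "const" to a type chain: once the value is
# wrapped read-only, further assignment is rejected (at run time here,
# whereas Mesham reports the error at compile time).

@dataclass
class Mutable:
    value: int

@dataclass(frozen=True)
class Const:
    value: int

a = Mutable(23)
a.value = 3            # fine: no "const" in the chain yet

b = Const(a.value)     # akin to a:a::const
try:
    b.value = 3        # akin to the failing a:=3
except FrozenInstanceError:
    print("assignment to const rejected")
```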
Modifying types in this form can be very powerful, but there are some points to bear in mind. Firstly, it is not possible to modify the [[Allocated|allocated]] type or its contents, as we are changing the behaviour of a variable but not if and where it is allocated in memory; doing so will result in an error. Secondly, modifying a type binds that modification to the local scope; once we leave this scope the type reverts back to what it was before.
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile because the programmer has specified that, just for the assignment at line 4, the [[Const|const]] type should be appended to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
[[Category:Tutorials|Simple Types]]
5958826ae8571a4a2afdb4999131823714a5dfb1
Tutorial - Functions
0
220
1204
1203
2013-01-19T18:25:02Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of functions and functional abstraction in Mesham</metadesc>
'''Tutorial number three''' - [[Tutorial_-_Simple Types|prev]] :: [[Tutorial_-_Parallel Constructs|next]]
== Introduction ==
In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very useful aspect of many languages and allows one to make code more manageable. We shall also take a look at how to provide optional command line arguments to some Mesham code.
== My first function ==
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
var c:=myAddFunction(a,20);
print(itostring(c)+"\n");
};
The above code declares two functions: ''myAddFunction'', which takes in two [[Int|Ints]] and returns an [[Int]] (the addition of these two numbers), and a ''main'' function which is the program entry point. In our ''main'' function you can see that we are calling out to ''myAddFunction'' using a mixture of the ''a'' variable and the constant value ''20''. The result of this function is then assigned to variable ''c'', which is displayed on standard output.
There are a number of points to note about this. First, notice that each function body is terminated with the sequential composition (;) token. This is because all blocks in Mesham must be terminated with some composition, and functions are no exception, although it is currently meaningless to terminate one with parallel composition. Secondly, move ''myAddFunction'' so that it appears below the ''main'' function and recompile - see that there is now an error? This is because we are using the function in the declaration of variable ''c'', whose type will be inferred from the function; if you wish to do this then the function must appear before that point in the code, but if we just wanted to use the function in any other way then the functions could appear in any order. As an exercise, place ''myAddFunction'' after the ''main'' function, explicitly type ''c'' to be an integer and, on the following line, assign the value of ''c'' to be the result of a call to the function - see that it now works fine. As a further exercise, notice that we don't really need variable ''c'' at all - remove it and, in the [[Print|print]] function call, replace the reference to ''c'' with a call to our own function itself.
== Function arguments ==
By default all [[:Category:Element Types|element types]] and [[Record|records]] are pass by value, whereas [[Array|arrays]] and [[Referencerecord|reference records]] are pass by reference. This is dependent on the manner in which these data types are allocated, the former using the [[Stack|stack]] type whereas the latter use the [[Heap|heap]] type. We can determine whether a function's arguments and return value are pass by value or reference by specifying the [[Stack|stack]] (value), [[Static|static]] (value) or [[Heap|heap]] (reference) type in the chain.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int) {
mydata:=76;
};
If you compile and execute the above code, then you will see the output ''10''. This is because, by default, an [[Int]] is pass by value, so the value of ''a'' is passed into ''myChangeFunction'', which sets ''mydata'' to be equal to it. When we modify ''mydata'', because it has entirely different memory from ''a'', it has no effect upon ''a''.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int::heap) {
mydata:=76;
};
This code snippet is very similar to the previous one, but we have added the [[Heap|heap]] type to the chain of ''mydata'' - if you compile and execute this you will now see the output ''76''. This is because, by using the [[Heap|heap]] type, we have changed to pass by reference, which means that ''mydata'' and ''a'' share the same memory and hence a change to one will modify the other. As far as function arguments go, it is fine to have a variable's memory allocated by some means and to pass it to a function which expects memory in a different form - such as above, where ''a'' is (by default) allocated to stack memory but ''mydata'' is on heap memory. In such cases Mesham handles the necessary transformations.
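The value/reference distinction the two snippets demonstrate can be sketched in Python (an analogy, not Mesham; the function names are hypothetical): rebinding an integer parameter mimics the default pass-by-value [[Int]], while mutating a shared one-element list mimics the heap (reference) case.

```python
# Sketch of the two behaviours above. Rebinding an int parameter leaves the
# caller's variable untouched (like the default pass-by-value Int), while
# mutating a shared one-element list changes what the caller sees (like
# adding the heap type to obtain pass-by-reference).

def change_by_value(mydata):
    mydata = 76          # rebinds the local name only

def change_by_reference(mydata):
    mydata[0] = 76       # mutates the shared storage

a = 10
change_by_value(a)
print(a)                 # 10 - caller unaffected

b = [10]
change_by_reference(b)
print(b[0])              # 76 - caller sees the change
```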
=== The return type ===
function Int::heap myNewFunction() {
var a:Int::heap;
a:=23;
return a;
};
The code snippet above will return an [[Int]] by its reference when the function is called; internal to the function we are creating variable ''a'', allocating it to [[Heap|heap]] memory, setting the value and returning it. However, an important distinction between function arguments and function return types is that the memory allocation of what we are returning must match the type. For example, change the type chain in the declaration from ''Int::heap'' to ''Int::stack'' and recompile - see that there is an error? When we think about this logically it is the only way in which this can work - if we allocate to the [[Stack|stack]] then the memory is on the current function's stack frame, which is destroyed once that function returns; if we were to return a reference to an item on this then that item would no longer exist and bad things would happen! By ensuring that the memory allocations match, we have allocated ''a'' to the heap, which exists outside of the function calls and will be garbage collected when appropriate.
== Leaving a function ==
Regardless of whether we are returning data from a function or not, we can use the [[Return|return]] statement on its own to force leaving that function.
function void myTestFunction(var b:Int) {
if (b==2) return;
};
In the above code if variable ''b'' has a value of ''2'' then we will leave the function early. Note that we have not followed the conditional by an explicit block - this is allowed (as in many languages) for a single statement.
As an exercise, add some value after the return statement so that, for example, it reads something like ''return 23;'' - now attempt to recompile and see that you get an error, because in this case we are attempting to return a value when the function's definition reports that it returns no such thing.
== Command line arguments ==
The main function also supports the reading of command line arguments. By definition you can provide the main function with either no function arguments (as we have seen up until this point) or alternatively two arguments, the first an [[Int]] and the second an [[Array|array]] of [[String|Strings]].
#include <io>
#include <string>
function void main(var argc:Int, var argv:array[String]) {
var i;
for i from 0 to argc - 1 {
print(itostring(i)+": "+argv[i]+"\n");
};
};
Compile and run the above code; with no arguments you will just see the name of the program, but if you now supply command line arguments (separated by spaces) then these will also be displayed. There are a couple of general points to note about the code above. Firstly, the variable names ''argc'' and ''argv'' for the command line arguments are the generally accepted names to use - although you can call these variables whatever you want if you are so inclined.
Secondly, notice how we only tell the [[Array|array]] type that it is a collection of [[String|Strings]] and give no information about its dimensions. This is allowed in a function argument's type, as we don't always know the size, but it limits us to one dimension and stops any error checking from happening on the index bounds used to access elements. Lastly, see how we loop from 0 to ''argc - 1''; the [[For|for]] loop is inclusive of its bounds, so if ''argc'' were zero then one iteration would still occur, which is not what we want here.
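The same walk over the arguments can be sketched in Python (an analogy, with a hypothetical helper name): `sys.argv` plays the role of ''argv'', its length that of ''argc'', and `argv[0]` is again the program name.

```python
import sys

# Sketch of the argument loop above in Python: sys.argv plays the role of
# argv (argv[0] is again the program name) and len(sys.argv) that of argc.

def format_args(argv):
    # Mesham's "for i from 0 to argc - 1" is inclusive of both bounds,
    # which corresponds to Python's range(0, argc).
    argc = len(argv)
    return [str(i) + ": " + argv[i] for i in range(0, argc)]

for line in format_args(sys.argv):
    print(line)
```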
[[Category:Tutorials|Functions]]
784beaeb91bd2e135531dcd98397438433c90368
Tutorial - Parallel Constructs
0
221
1211
1210
2013-01-19T18:26:02Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing how to structure parallel code in Mesham</metadesc>
'''Tutorial number four''' - [[Tutorial_-_Functions|prev]] :: [[Tutorial_-_Shared Memory|next]]
== Introduction ==
In this tutorial we shall look at more advanced parallel constructs than those discussed in the [[Tutorial - Hello world|Hello world]] tutorial. There will also be some reference to the concepts covered in the [[Tutorial - Functions|functions]] and [[Tutorial - Simple Types|simple types]] tutorials.
== Parallel composition ==
In the [[Tutorial - Hello world|Hello world]] tutorial we briefly saw an example of using parallel composition (||) to control parallelism. Let's now further explore this with some code examples:
#include <io>
#include <string>
#include <parallel>
function void main() {
{
var i:=pid();
print("Hello from PID "+itostring(i)+"\n");
} || {
var i:=30;
var f:=20;
print("Addition result is "+itostring(i+f)+"\n");
};
};
This specifies two blocks of code, both running in parallel (two processes): the first will display a message containing the process ID, whilst the other process will declare two [[Int]] variables and display the result of adding these together. This approach, of specifying code in blocks and then using parallel composition to run the blocks in parallel on different processes, is a useful one. As a further exercise, try rearranging the blocks and view the value of the process ID reported; also add further parallel blocks (via more parallel composition) to do things and look at the results.
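An analogous structure can be sketched with Python threads (purely an illustration; Mesham's parallel composition creates separate processes, and `get_ident()` here merely stands in for ''pid()''):

```python
import threading

# Illustrative counterpart of the two parallel blocks above: each Python
# thread stands in for one Mesham process, running its block concurrently
# with the other. (Mesham uses real processes; this is only a sketch.)
results = {}

def block_one():
    i = threading.get_ident()          # loosely stands in for pid()
    results["pid"] = i
    print("Hello from thread " + str(i))

def block_two():
    i = 30
    f = 20
    results["sum"] = i + f
    print("Addition result is " + str(i + f))

t1 = threading.Thread(target=block_one)
t2 = threading.Thread(target=block_two)
t1.start(); t2.start()                 # the two blocks run in parallel
t1.join(); t2.join()
```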
=== Unstructured parallel composition ===
In the previous example we structured parallel composition by using blocks, it is also possible to run statements in parallel using this composition, although it is important to understand the associativity and precedence of parallel composition and sequential composition when doing so.
#include <io>
#include <string>
#include <parallel>
function void main() {
var i:=0;
var j:=0;
var z:=0;
var m:=0;
var n:=0;
var t:=0;
{i:=1;j:=1||z:=1;m:=1||n:=1||t:=1;};
print(itostring(pid())+":: i: "+itostring(i)+", j: "+itostring(j)+", z: "+itostring(z)
+", m: "+itostring(m)+", n: "+itostring(n)+", t: "+itostring(t)+"\n");
};
This is a nice little program to help figure out what, for each process, is being run. You can further play with this code and tweak it as required. Broadly, we are declaring all the variables to be [[Int|Ints]] of zero value and then executing the code in the { } code block followed by the [[Print|print]] statement on all processes. Where it gets interesting is when we look at the behaviour inside the code block itself. The assignment ''i:=1'' is executed on all processes, sequentially composed with the rest of the code block; ''j:=1'' is executed just on process 0, whereas at the same time the value 1 is written to variables ''z'' and ''m'' on process 1. Process 2 performs the assignment ''n:=1'' and lastly process 3 assigns 1 to variable ''t''. From this example you can understand how parallel composition behaves when unstructured like this - as an exercise, add additional code blocks (via braces) and see how that changes the behaviour by specifying explicitly what code belongs where.
The first parallel composition will bind to the statement (or code block) immediately before it and then to those after it - hence ''i:=1'' is performed on all processes, but the sequentially composed statements after the parallel composition are performed on just one process. Incidentally, if we removed the { } braces around the unstructured parallel block, then the [[Print|print]] statement would just be performed on process 3 - if it is not clear why, then experiment and reread this section to fully understand.
== Allocation inference ==
If we declare a variable to have a specific allocation strategy within a parallel construct then this must be compatible with the scope of that construct. For example:
function void main() {
group 1,3 {
var i:Int::allocated[multiple[]];
};
};
If you compile the above code then it will work, but you get the warning ''Commgroup type and process list inferred from multiple and parallel scope''. So what does this mean? Well, we are selecting a [[Group|group]] of processes (in this case processes 1 and 3) and declaring variable ''i'' to be an [[Int]] allocated to all processes; however, the processes not in scope (0 and 2) will never know of the existence of ''i'' and hence can never be involved with it in any way. Even worse, if we were to synchronise on ''i'' then it might cause deadlock on these other processes that have no knowledge of it. Therefore, allocating ''i'' to all processes is the wrong thing to do here. Instead, what we really want is to allocate ''i'' to the group of processes that are in parallel scope using the [[Commgroup|commgroup]] type; if it is omitted the compiler is clever enough to deduce this, put that behaviour in, but warn the programmer that it has done so.
If you modify the type chain of ''i'' from ''Int::allocated[multiple[]]'' to ''Int::allocated[multiple[commgroup[]]]'' and recompile you will see a different warning saying that it has just inferred the process list from parallel scope (and not the type as that is already there.) Now change the type chain to read ''Int::allocated[multiple[commgroup[1,3]]]'' and recompile - see that there is no warning as we have explicitly specified the processes to allocate the variable to? It is up to you as a programmer and your style to decide whether you want to explicitly do this or put up with the compiler warnings.
So, what happens if we try to allocate variable ''i'' to some process that is not in parallel scope? Modify the type chain of ''i'' to read ''Int::allocated[multiple[commgroup[1,2]]]'' and recompile - you should see an error now that looks like ''Process 2 in the commgroup is not in parallel scope''. We have the same protection for the single type too:
function void main() {
group 1,3 {
var i:Int::allocated[single[on[0]]];
};
};
If you try to compile this code, then you will get the error ''Process 0 in the single allocation is not in parallel scope'' which is because you have attempted to allocate variable ''i'' to process 0 but this is not in scope so can never be done. Whilst we have been experimenting with the [[Group|group]] parallel construct, the same behaviour is true of all parallel structural constructs.
== Nesting parallelism ==
Nesting parallelism is currently disallowed; whilst it could provide more flexibility for the programmer, it makes for a more complex language from the designer's and compiler writer's point of view.
function void main() {
var p;
par p from 0 to 3 {
proc 0 {
skip;
};
};
};
If you compile the above code then it will result in the error ''Can not currently nest par, proc or group parallel blocks''.
== Parallelism in other functions ==
Up until this point we have placed our parallel constructs within the ''main'' function, but there is no specific reason for this.
#include <io>
function void main() {
a();
};
function void a() {
group 1,3 {
print("Hello from 1 or 3\n");
};
};
If you compile and run the above code then you will see that processes 1 and 3 display the message on standard output. As an exercise, modify this code to include further functions which have their own parallel constructs, and call them from ''main'' or from your own functions.
An important point to bear in mind is that ''a'' is now a parallel function, and there are some things to consider here. Firstly, all parallel constructs ([[Par|par]], [[Proc|proc]] and [[Group|group]]) are blocking calls - hence all processes must see these, so to avoid deadlock all processes must call the function ''a''. Secondly, as discussed in the previous section, remember how we disallow nested parallelism? Well, we relax this restriction here '''but''' it is still not safe:
#include <io>
function void main() {
var p;
par p from 0 to 3 {
a();
};
};
function void a() {
group 1,3 {
print("Hello from 1 or 3\n");
};
};
If you compile the above code then it will work, but you will get the warning ''It might not be wise calling a parallel function from within a parallel block''. Running the executable will result in the correct output, but changing a ''3'' to a ''2'' in the [[Par|par]] loop will result in deadlock. Therefore it is best to avoid this technique in practice.
[[Category:Tutorials|Parallel Constructs]]
0bb1bd17c7e11c7496a29db6d4112a6b4d7328e7
Tutorial - Shared Memory
0
222
1218
1217
2013-01-19T18:26:58Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing basic shared remote memory communication in Mesham</metadesc>
'''Tutorial number five''' - [[Tutorial_-_Parallel Constructs|prev]] :: [[Tutorial_-_Parallel Types|next]]
== Introduction ==
In this tutorial we will be looking at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality, and follows a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, being in a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used, and this can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated, which in itself is driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the above code then you will see the output ''1'' - so let's have a look at what exactly is going on here. Variable ''a'' is allocated to all processes; all processes set the value to ''1'', process one then changes the value to ''99'', we do a barrier synchronisation on ''a'' and then process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access are always local - i.e. in this case, process one modifying the value has no impact on the ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment; let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'', then recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated just to a specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further with this: remove ''a'' from the [[Sync|sync]] statement (line 10), then recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable; the [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Now comment out the [[Sync|sync]] keyword entirely, recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into some remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then reads and writes will always be local operations, but if a variable is allocated just to a single process then reads and writes will be remote operations on every other process.
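This multiple-versus-single distinction can be loosely mirrored with Python threads (an analogy only; the variable names are hypothetical): thread-local storage behaves like a ''multiple'' allocation, where every process owns its own copy and writes stay local, while a single shared object behaves like a ''single'' allocation that everybody reads and writes.

```python
import threading

# Loose sketch of the allocation rules above using threads in place of
# processes. "local_a" mimics Int::allocated[multiple[]]: every thread owns
# a private copy, so thread one's write never reaches thread zero.
# "shared_a" mimics a single allocation: one copy that everybody accesses.
local_a = threading.local()
shared_a = [1]
barrier = threading.Barrier(2)     # stands in for "sync"
seen = {}

def thread_zero():
    local_a.value = 1              # this thread's own copy
    barrier.wait()
    seen["local"] = local_a.value  # still 1: the other write was local
    seen["shared"] = shared_a[0]   # 99: the shared copy was updated

def thread_one():
    local_a.value = 99             # touches only this thread's copy
    shared_a[0] = 99               # touches the single shared copy
    barrier.wait()

t0 = threading.Thread(target=thread_zero)
t1 = threading.Thread(target=thread_one)
t0.start(); t1.start()
t0.join(); t1.join()
print(seen["local"], seen["shared"])   # 1 99
```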
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
b:=a;
};
sync b;
proc 1 {
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero then alone (via the [[Proc|proc]] statement will modify ''a'' locally (as it is held there) and then assign ''b'' to be the value of ''a''. We then [[Sync|synchronise]] based upon variable ''b'' and process one will display its value of ''b''. Stepping back a moment, what we are basically doing here is assigning a value to a variable allocated on all processes from one allocated on a single process. The result is that process zero will write the value of variable ''a'' into ''b'' on all processes (it is a broadcast.) If you remove the [[Sync|sync]] statement on line 11 then you will see that instead of displaying the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to update this remote value on process one from process zero.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
proc 1 {
a:=2;
};
sync a;
group 0, 2 {
print(itostring(a)+"\n");
};
};
The same thing will happen with [[Commgroup|communication groups]] too compile and run the following code, you will see that process one has written the value ''2'' into the memory of variable ''a'' which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync a;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero, we are then performing an assignment ''a:=b'' at line 12 which, because the variables are on the same process is local and occurs immediately. Now, change ''processOneAllocation'' to be equal to ''1'' and uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. See the same value - but now process 0 is writing the value of ''b'' into the remote memory of ''a'' and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same here then it is local and if not then remote.
== Limits of communication ==
Currently all communication is based upon assignment, to illustrate this look at the following code
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int;
proc 0 {
b:=a;
};
};
If we compile this then we will get the error message ''Assignment must be visible to process 1'' which is because, as communication is assignment driven, process one (which contains ''a'') must drive this assignment and communication. To fix this you could change from process zero to process one doing the assignment at line 8 which would enable this code to compile correctly. It is planned in the future to extend the compiler to support this pull (as well as push) remote memory mechanism.
[[Category:Tutorials|Shared Memory]]
7582e962777eab8b7be0cf9f21aff38567da24a1
1219
1218
2013-01-19T18:28:14Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing basic, shared remote memory, communication in Mesham</metadesc>
'''Tutorial number five''' - [[Tutorial_-_Parallel Constructs|prev]] :: [[Tutorial_-_Parallel Types|next]]
== Introduction ==
In this tutorial we will be looking at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality; it follows a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, being in a number of intermediate states. We move from one intermediate state to the next when [[Sync|synchronisation]] is used, which can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated, which in itself is driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the above code you will see the output ''1'' - so let's have a look at what exactly is going on here. Variable ''a'' is allocated to all processes, and all processes set its value to ''1''; process one then changes the value to ''99'', we perform a barrier synchronisation on ''a'' and process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access are always local - i.e. in this case, process one modifying the value has no impact on the copy of ''a'' held by other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment. Let's do something a bit more interesting - change ''multiple[]'' to ''single[on[0]]'' and recompile and run the code. Now the output is different: it displays ''99''. That is because if a variable is allocated to just one specific process and another process reads or writes it, this involves remote access to that memory (communication). Let's experiment further: remove ''a'' from the [[Sync|sync]] statement and recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then it barrier synchronises just on that variable, whereas [[Sync|sync]] by itself barrier synchronises on '''all''' variables which require it. Now comment out the [[Sync|sync]] keyword entirely, recompile and run the code - see that it displays ''1'' again? This is because we can only guarantee that a value has been written into remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation, but if a variable is allocated to just a single process then read/write from any other process will be a remote operation.
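These two allocation behaviours can be mimicked in plain C as a mental model. This is only a sequential sketch - Mesham actually compiles down to MPI - and the helper names (''run_multiple'', ''run_single'', ''sync_barrier'') are ours for illustration, not part of Mesham or its runtime:

```c
#include <assert.h>

#define NPROCS 2

/* multiple[]: every process owns its own copy, so writes are always local */
static int multiple_a[NPROCS];

/* single[on[0]]: one copy, owned by process 0; a write by another process
   is buffered and only becomes visible at the barrier synchronisation */
static int single_a;
static int pending_write;
static int has_pending;

static void remote_write(int value) { pending_write = value; has_pending = 1; }

static void sync_barrier(void) {
    if (has_pending) { single_a = pending_write; has_pending = 0; }
}

int run_multiple(void) {
    for (int p = 0; p < NPROCS; p++) multiple_a[p] = 1; /* a := 1 everywhere */
    multiple_a[1] = 99;     /* proc 1 writes its own copy only */
    return multiple_a[0];   /* proc 0 still sees 1             */
}

int run_single(int do_sync) {
    single_a = 1;           /* proc 0 owns the only copy */
    remote_write(99);       /* proc 1 writes remotely    */
    if (do_sync) sync_barrier();
    return single_a;        /* 99 after sync, 1 without  */
}
```

The point of the sketch is that with ''multiple[]'' there is no communication at all, whereas with ''single[on[0]]'' the remote write only lands once the barrier runs.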
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
b:=a;
};
sync b;
proc 1 {
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We declare two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) modifies ''a'' locally (as it is held there) and then assigns the value of ''a'' to ''b''. We then [[Sync|synchronise]] on variable ''b'' and process one displays its value of ''b''. Stepping back a moment, what we are basically doing here is assigning to a variable allocated on all processes from one allocated on a single process. The result is that process zero writes the value of variable ''a'' into ''b'' on all processes (it is a broadcast). If you remove the [[Sync|sync]] statement then you will see that instead of displaying the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value). This is because synchronisation must occur before the remote value on process one is updated from process zero.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
proc 1 {
a:=2;
};
sync a;
group 0, 2 {
print(itostring(a)+"\n");
};
};
The same thing happens with [[Commgroup|communication groups]] too. Compile and run the above code and you will see that process one has written the value ''2'' into the memory of variable ''a'', which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync a;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we allocate variables ''a'' and ''b'' both on process zero, then perform the assignment ''a:=b'' which, because the variables are on the same process, is local and occurs immediately. Now change ''processOneAllocation'' to ''1'', uncomment the [[Sync|sync]] statement and recompile and run. You will see the same value - but now process zero is writing the value of ''b'' into the remote memory of ''a'', and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported instead. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same then the assignment is local, and if not then it is remote.
== Limits of communication ==
Currently all communication is based upon assignment; to illustrate this, look at the following code:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int;
proc 0 {
b:=a;
};
};
If we compile this we get the error message ''Assignment must be visible to process 1''. Because communication is assignment driven, process one (which holds ''a'') must drive this assignment and communication. To fix this you could have process one, rather than process zero, perform the assignment, which would allow the code to compile correctly. It is planned in the future to extend the compiler to support this pull (as well as push) remote memory mechanism.
[[Category:Tutorials|Shared Memory]]
a183e743db560ca50293cad7f1f93b3aab42466f
Tutorial - Parallel Types
0
224
1239
1238
2013-01-19T18:27:34Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of types for more advanced parallelism in Mesham</metadesc>
'''Tutorial number six''' - [[Tutorial_-_Shared Memory|prev]] :: [[Tutorial_-_Arrays|next]]
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model, it can have a performance penalty associated with it. In this tutorial we shall look at overriding the default communication, via types, to a more message passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]] between processes ''1'' and ''2''. Process 1 writes the value ''23'' into the channel and process 2 then reads that value out of it. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example).
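As a mental model, a channel behaves like a single-slot buffer between the two endpoints: the writer deposits a value and the reader takes it out. The C sketch below is purely illustrative - real Mesham channels map onto message passing between processes, and the type and function names here are ours:

```c
#include <assert.h>

/* A single-slot, one-directional channel. The 'full' flag models the
   blocking hand-off: the reader has data to take only once the writer
   has deposited it. */
typedef struct { int value; int full; } channel_t;

static void channel_send(channel_t *c, int v) { c->value = v; c->full = 1; }

static int channel_recv(channel_t *c) { c->full = 0; return c->value; }

int demo(void) {
    channel_t a = {0, 0};
    channel_send(&a, 23);    /* proc 1: a := 23 */
    return channel_recv(&a); /* proc 2: b := a  */
}
```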
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int:: pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=1;
var slave:=2;
if (i%2!=0) {
master:=2;
slave:=1;
};
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point-to-point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that instead only process 1 may send and only process 2 may receive.
== Extra parallel control ==
By default the channel type is blocking; Mesham provides a number of fine-grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that the code will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we declare ''a'' to be a normal [[Int]] variable; we then coerce the [[Broadcast|broadcast]] type into the existing type chain of ''a'' just for that assignment, telling the type that process ''2'' is the root process. The root process is the one that drives the broadcast itself, i.e. here process 2 is sending the value ''23'' to all other processes. Afterwards we simply use ''a'' as a normal program variable to display its value. This use of types is actually quite a powerful one; we can append extra types for a specific expression and, after that expression has completed, the behaviour is back to what it was before.
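What the coerced broadcast does can be pictured in C as copying the root's value into every process's copy of the variable. In the sketch below, one array slot stands in for each process's memory; the layout and names are our illustration, not Mesham's implementation (which uses MPI collectives):

```c
#include <assert.h>

#define NPROCS 4
#define ROOT   2

/* One slot per process's copy of variable a */
static int a[NPROCS];

/* a::broadcast[2] := 23 - the root's value is written to every process */
void broadcast_assign(int value) {
    a[ROOT] = value;             /* the root drives the broadcast  */
    for (int p = 0; p < NPROCS; p++)
        a[p] = a[ROOT];          /* every process receives a copy  */
}
```

After ''broadcast_assign(23)'' every slot holds ''23'', matching the tutorial where every process prints the same value.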
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, applying some operation, [[Reduce|reduce]] this to a resulting value.
#include <io>
#include <string>
function void main() {
var p;
par p from 0 to 19 {
var a:Int;
a::reduce[0,"sum"]:=p;
if (p==0) print(itostring(a)+"\n");
};
};
This code will combine the values of each process's ''p'' onto process 0, summing them all up. Multiple operations are supported; they are listed in the [[Reduce|reduce type documentation]].
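The effect of the reduction can be checked with a quick sequential C sketch (ours, not Mesham's MPI-based implementation): each process contributes its value of ''p'' and the root ends up with the sum 0 + 1 + ... + 19 = 190:

```c
#include <assert.h>

#define NPROCS 20

/* a::reduce[0,"sum"] := p - combine each process's contribution with the
   "sum" operation, leaving the result on the root (process 0).
   Slot p stands in for process p's value. */
int reduce_sum(void) {
    int contribution[NPROCS];
    for (int p = 0; p < NPROCS; p++)
        contribution[p] = p;              /* each process assigns its p */
    int total = 0;
    for (int p = 0; p < NPROCS; p++)
        total += contribution[p];         /* the "sum" reduction        */
    return total;                         /* value of a on process 0    */
}
```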
[[Category:Tutorials|Parallel Types]]
Tutorial - Arrays
<metadesc>Tutorial describing collecting data together via arrays in Mesham</metadesc>
'''Tutorial number seven''' - [[Tutorial_-_Parallel Types|prev]]
== Introduction ==
An [[Array|array]] is a collection of element data in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall have a look at how to create, use and communicate arrays.
== Simple arrays ==
function void main() {
var a:array[Int,10];
};
The above code will declare variable ''a'' to be an [[Array|array]] of ten [[Int|Ints]], indexed 0 to 9 inclusive. In the absence of further information a set of default types will be applied: [[Heap|heap]], [[Onesided|onesided]], [[Row|row]], [[Allocated|allocated]], [[Multiple|multiple]]. Arrays allocated on the heap are subject to garbage collection, which will remove them when they are no longer used.
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
var i;
for i from 0 to 9 {
a[i]:=i;
};
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code snippet demonstrates writing to and reading from elements of an array; if you compile and run it you will see that it displays the values ''0'' to ''9'' on standard output. We can access an element of an array (for reading or writing) via the ''[x]'' syntax, where ''x'' is either an [[Int]] constant or variable.
=== Arrays and functions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
fill(a);
display(a);
};
function void fill(var a:array[Int,10]) {
var i;
for i from 0 to 9 {
a[i]:=i;
};
};
function void display(var a:array[Int]) {
var i;
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code demonstrates passing arrays into functions and there are a couple of noteworthy points to make here. First, because an [[Array|array]] is, by default, allocated to the [[Heap|heap]], as discussed in the [[Tutorial - Functions|functions tutorial]], this is pass by reference. Hence modifications made in the ''fill'' function do affect the original data allocated in the ''main'' function, which is what we want here. Secondly, see that the type we provide to the ''display'' function does not have any explicit size associated with the array? It is not always possible to know the size of an array that is being passed into a function, so Mesham allows the type of a function argument to be specified without a size, but with two restrictions: first it must be a one dimensional array, and secondly no compile time bounds checking can take place.
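The pass-by-reference behaviour here is the same as passing a pointer to an array in C. This sketch is our analogy, not generated Mesham code; it shows a ''fill'' that writes straight into the caller's memory:

```c
#include <assert.h>

/* An array argument is passed by reference: fill() receives a pointer to
   the caller's storage, so its writes are visible to the caller - just as
   modifications in the Mesham fill() affect the array declared in main(). */
static void fill(int *a, int n) {
    for (int i = 0; i < n; i++)
        a[i] = i;
}

int first_plus_last(void) {
    int a[10] = {0};
    fill(a, 10);        /* modifies the original array in place */
    return a[0] + a[9]; /* 0 + 9 */
}
```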
=== Multi dimensional arrays ===
Arrays can be any number of dimensions just by adding extra bounds into the type declaration:
function void main() {
var a:array[Int,16,8];
a[0][1]:=23;
};
This code illustrates declaring variable ''a'' to be an [[Array|array]] of two dimensions; the first of size 16 and the second 8. By default all allocation of arrays is [[Row|row major]] although this can be overridden. Line three illustrates writing into an element of a two dimensional array.
== Communication of arrays ==
Arrays can be communicated entirely, per dimension or by individual elements.
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
a[0][1]:=28;
};
sync a;
proc 1 {
print(itostring(a[0][1])+"\n");
};
};
In this example process 0 writes to the (remote) memory of process 1 which contains the array, synchronisation occurs and then the value is displayed by process 1 to standard output.
=== Communicating multiple dimensions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 7 {
a[2][i]:=i;
};
};
sync a;
proc 1 {
var i;
for i from 0 to 7 {
print(itostring(a[2][i])+"\n");
};
};
};
Compile and run this code and look at the output - it is just a list of the value ''8''. Not what you expected? This happens because at each assignment ''a[2][i]:=i;'' we are setting the remote value of ''a'' at that index to the value held in ''i''. However, the write does not complete until the [[Sync|synchronisation]], and at that point the value of ''i'' is ''8'' (the loop iterates up to and including 7, after which ''i'' is incremented, found to be too large and the loop ceases). It is something to be aware of - the value of a variable being remotely written ''matters'' until after the corresponding synchronisation.
There are a number of ways in which we could change this code to make it do what we want, the easiest is to use a temporary variable allocated on the heap (and will be garbage collected after the synchronisation.) To do this, replace the ''proc 0'' block with:
proc 0 {
var i;
for i from 0 to 7 {
var m:Int::heap;
m:=i;
a[2][i]:=m;
};
};
This is an example of writing into remote memory of a process and modifying multiple indexes of an array (in any dimension.)
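The underlying pitfall - a deferred write capturing the loop variable itself rather than its value at that iteration - can be reproduced in C with pointers. This is only an analogy for how the remote write behaves; the names and mechanism are ours:

```c
#include <assert.h>

static int i;                 /* the loop counter                        */
static int target[8];         /* stands in for a[2][0..7] on process 1   */
static const int *pending[8]; /* one deferred write recorded per index   */

/* With use_copy == 0 every deferred write points at i itself, so at the
   "sync" all of them read i's final value, 8. With use_copy == 1 each
   iteration snapshots i into a temporary (the m in the tutorial's fix). */
void loop_and_sync(int use_copy) {
    static int copies[8];
    for (i = 0; i <= 7; i++) {
        if (use_copy) { copies[i] = i; pending[i] = &copies[i]; }
        else          { pending[i] = &i; }
    }                                  /* loop exits with i == 8 */
    for (int k = 0; k < 8; k++)
        target[k] = *pending[k];       /* the "synchronisation"  */
}
```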
=== Communicating entire arrays ===
#include <io>
#include <string>
function void main() {
var a:array[Int,20]::allocated[single[on[1]]];
var b:array[Int,20]::allocated[single[on[2]]];
proc 1 {
var i;
for i from 0 to 19 {
a[i]:=1;
};
};
b:=a;
sync;
proc 2 {
var i;
for i from 0 to 19 {
print(itostring(b[i])+"\n");
};
};
};
This code example demonstrates populating an array held on one process, assigning it in its entirety to an array on another process via ''b:=a'', synchronising, and then the other process reading out all elements of the target array which has just been remotely written.
== Row and column major ==
By default arrays are row major allocated using the [[Row|row]] type. This can be overridden to column major via the [[Col|col]] type.
function void main() {
var a:array[Int,16,8]::allocated[col::multiple];
};
will allocate array ''a'' to be an [[Int]] array of 16 by 8, allocated to all processes using column major memory allocation.
For something more interesting let's have a look at the following code:
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8];
var i;
var j;
for i from 0 to 15 {
for j from 0 to 7 {
a[i][j]:=(i*10) + j;
};
};
print(itostring(a::col[][14][7]));
};
By default variable ''a'' is [[Row|row major]] allocated and we are filling up the array in this fashion. However, in the [[Print|print]] statement we are accessing the indexes of this array in a [[Col|column major]] fashion. Try changing [[Col|col]] to [[Row|row]] or removing it altogether to see the difference in value. Behind the scenes the types perform the appropriate memory look-up based upon their meaning and the indexes provided. Mixing memory allocation in this manner can be very useful for array transposition amongst other things. ''Exercise:'' Experiment with the [[Col|col]] and [[Row|row]] types and also see what effect it has placing them in the type chain of ''a'' like in the previous example.
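The look-up the types perform can be sketched with plain index arithmetic in C. Note this is our illustration of row versus column major addressing, not Mesham's internal code, so the exact value Mesham prints may follow different conventions:

```c
#include <assert.h>

#define ROWS 16
#define COLS 8

/* Row major: element [i][j] of a 16x8 array lives at offset i*COLS + j.
   Column major reads the same pair as offset j*ROWS + i, so viewing
   row-major data through column-major indexing picks out a different
   element of the underlying memory. */
int row_major(int i, int j) { return i * COLS + j; }
int col_major(int i, int j) { return j * ROWS + i; }

/* Fill a flat buffer row-major with a[r][c] = r*10 + c (as the tutorial
   does), then read index [i][j] under either convention. */
int sample(int i, int j, int as_col) {
    int flat[ROWS * COLS];
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            flat[r * COLS + c] = r * 10 + c;
    return flat[as_col ? col_major(i, j) : row_major(i, j)];
}
```

Reading ''[14][7]'' row-major gives the element that was stored there (147), while the column-major view of the same memory lands on a different slot entirely (156 in this sketch).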
[[Category: Tutorials|Arrays]]
Mesham parallel programming language:Copyrights
The intellectual property of the Mesham programming language, associated compilers, runtime libraries and documentation, including example codes, is owned by Nick Brown. It may be used and reproduced as per the Creative Commons licence terms but all ownership remains with the author.
The libgc garbage collector is owned by Hans Boehm and released under this [http://www.hpl.hp.com/personal/Hans_Boehm/gc/license.txt licence].
Mesham parallel programming language:About
#REDIRECT [[What_is_Mesham]]
Mesham parallel programming language:General disclaimer
= No warranty of any kind =
Mesham makes no guarantee of the validity or safety of the information contained on or copied from this site. This site contains source code, binary executables and documentation which can be used in the creation of source code. The information contained here is for research purposes and should be verified as accurate by yourself before use. Any software (source or binary) created from the information contained here, or software located at this site, has the following disclaimer:
<pre>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
</pre>
We strongly advise that you virus check all downloaded software, regardless of origin, before use.
Mesham parallel programming language:Privacy policy
=Privacy Policy=
Where possible Mesham will attempt to respect your privacy. No information collected will be shared with third parties. This includes such data as server logs and the information not publicly shared by authors and editors. Mesham is located in the United Kingdom, and may be required to comply with legal requests to identify people if they commit illegal activities on this site. Please, no warez, virus writing, OS exploiting, or links to those types of activities. Please do not add your private information unless you are sure you want it shared, as deleting content in the wiki does not permanently remove it. Do not post other people's private information.
Download 1.0
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler, based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and ''all components'' archives in the ''includes'' directory). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go: issue ''mcc -env'', which will display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well an executable ''test'' will appear; run this via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
56ba545837563111d29a41977cbe9c09e774c0a6
1275
1274
2013-01-20T14:11:23Z
Polas
1
/* Prerequisites */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch and this line of compiler (version 1.0 and upwards) is known as at the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and the ''all components'' archive in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this header, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
Alternatively, if you do not wish to set these last two environment variables, you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler.) All being well, an executable ''test'' will appear; after ensuring your favourite MPI process manager is running, run it via ''mpiexec -np 4 ./test''.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
fd8ee8ed48db2bbe854f7a6341536e37d4955768
1276
1275
2013-01-20T15:20:55Z
Polas
1
/* Introduction */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x64 and 32 bit x86 Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and the ''all components'' archive in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this header, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
Alternatively, if you do not wish to set these last two environment variables, you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler.) All being well, an executable ''test'' will appear; after ensuring your favourite MPI process manager is running, run it via ''mpiexec -np 4 ./test''.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
90e9c3f985d99ed842db22bac0b0d5db7b33a45d
1277
1276
2013-01-20T15:21:13Z
Polas
1
/* Introduction */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with 64 and 32 bit x86 Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and the ''all components'' archive in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this header, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
Alternatively, if you do not wish to set these last two environment variables, you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler.) All being well, an executable ''test'' will appear; after ensuring your favourite MPI process manager is running, run it via ''mpiexec -np 4 ./test''.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
92526badc414bd652190d0d571d19806df3263b7
1278
1277
2013-01-20T15:21:31Z
Polas
1
/* Introduction */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and the ''all components'' archive in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this header, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
Alternatively, if you do not wish to set these last two environment variables, you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler.) All being well, an executable ''test'' will appear; after ensuring your favourite MPI process manager is running, run it via ''mpiexec -np 4 ./test''.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
50cd679eb9071a5f199cd96adb5f6a4f20ed6864
1279
1278
2013-01-20T15:23:45Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_232|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_232 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 19th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
''If you are unsure whether you are running a 32 bit or 64 bit system, issue uname -m; a result of x86_64 means 64 bit, while any other value such as i686 indicates 32 bit.''
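The check above can be wrapped in a small snippet; the mapping from the machine string to the download flavour follows the rule just stated:

```shell
# Map the machine string reported by `uname -m` to the download flavour.
arch=$(uname -m)
case "$arch" in
    x86_64) echo "64 bit - use the 64 bit downloads" ;;
    *)      echo "32 bit (reported as $arch) - use the 32 bit downloads" ;;
esac
```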
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and the ''all components'' archive in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this header, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
Alternatively, if you do not wish to set these last two environment variables, you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler.) All being well, an executable ''test'' will appear; after ensuring your favourite MPI process manager is running, run it via ''mpiexec -np 4 ./test''.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
07979f45a0a1ccd42782758dd39c16e8a4c602fb
1280
1279
2013-01-20T15:24:56Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_239|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''here''' and 32 bit '''here'''
* Latest compiler version: 1.0.0_239 released 20th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 20th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
''If you are unsure whether you are running a 32 bit or 64 bit system, issue uname -m; a result of x86_64 means 64 bit, while any other value such as i686 indicates 32 bit.''
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and the ''all components'' archive in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this header, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
Alternatively, if you do not wish to set these last two environment variables, you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into ''test.mesh'', then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler.) All being well, an executable ''test'' will appear; after ensuring your favourite MPI process manager is running, run it via ''mpiexec -np 4 ./test''.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
09a96a984f634538e16df9222177650b807b94d4
1281
1280
2013-01-20T15:27:34Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_239|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_239 released 20th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
''If you are unsure whether you are running a 32 bit or 64 bit system, issue uname -m; a result of x86_64 means 64 bit, while any other value such as i686 indicates 32 bit.''
== Prerequisites ==
In order to compile and run Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add this location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and the ''all components'' archive in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler; this header, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
Alternatively, if you do not wish to set these last two environment variables, you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag displays any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
    group 0,1,2,3 {
       print("Hello from process "+itostring(pid())+"\n");
    };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
1f0f535298a60a560f9e777862ac2f2cd21fcd94
1282
1281
2013-01-20T15:47:47Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_241|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler, based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions, which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_241 released 20th January 2013 - download 64 bit '''here''' and 32 bit '''here'''
* Latest runtime library version: 1.0.0 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
''If you are unsure whether you are running a 32-bit or 64-bit system, issue uname -m: a result of x86_64 means 64 bit, while any other value, such as i686, means 32 bit.''
== Prerequisites ==
To compile and run Mesham code you need an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to obtain these packages if you do not already have them installed.
== Installation Instructions ==
Although installation is a manual procedure, it is very simple and will be straightforward for anyone familiar with Linux.
It is suggested to download ''all components'', which provides you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add that location to your PATH environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its different components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and all-components archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables then you can alternatively symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag displays any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
    group 0,1,2,3 {
       print("Hello from process "+itostring(pid())+"\n");
    };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
cfbf1cc950af001eb390a9e43e6913da66c7f604
1283
1282
2013-01-20T17:17:09Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_241|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler, based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions, which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_241 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.0 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''here''' and 32 bit '''here'''
''If you are unsure whether you are running a 32-bit or 64-bit system, issue uname -m: a result of x86_64 means 64 bit, while any other value, such as i686, means 32 bit.''
== Prerequisites ==
To compile and run Mesham code you need an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to obtain these packages if you do not already have them installed.
== Installation Instructions ==
Although installation is a manual procedure, it is very simple and will be straightforward for anyone familiar with Linux.
It is suggested to download ''all components'', which provides you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add that location to your PATH environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its different components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and all-components archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables then you can alternatively symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag displays any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
    group 0,1,2,3 {
       print("Hello from process "+itostring(pid())+"\n");
    };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
3cf323dd01f2fe35bd15d872385ae4213c8f5411
1284
1283
2013-01-20T17:30:03Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_241|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler, based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions, which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_241 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.0 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running a 32-bit or 64-bit system, issue uname -m: a result of x86_64 means 64 bit, while any other value, such as i686, means 32 bit.''
== Prerequisites ==
To compile and run Mesham code you need an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to obtain these packages if you do not already have them installed.
== Installation Instructions ==
Although installation is a manual procedure, it is very simple and will be straightforward for anyone familiar with Linux.
It is suggested to download ''all components'', which provides you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add that location to your PATH environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its different components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and all-components archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
If you do not wish to set these last two environment variables then you can alternatively symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag displays any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
    group 0,1,2,3 {
       print("Hello from process "+itostring(pid())+"\n");
    };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
e7525c345f685f34454865d1de2b2153f40a3560
1285
1284
2013-02-23T18:34:48Z
Polas
1
/* Installation Instructions */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_241|released=January 2013}}
== Introduction ==
This is the latest version of the Mesham compiler, based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions, which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_241 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.0 released 20th January 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running a 32-bit or 64-bit system, issue uname -m: a result of x86_64 means 64 bit, while any other value, such as i686, means 32 bit.''
== Prerequisites ==
To compile and run Mesham code you need an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to obtain these packages if you do not already have them installed.
== Installation Instructions ==
Although installation is a manual procedure, it is very simple and will be straightforward for anyone familiar with Linux.
It is suggested to download ''all components'', which provides you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add that location to your PATH environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its different components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and all-components archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
An optional environment variable is '''MESHAM_C_COMPILER_ARGS''', which allows specific flags to be provided to the underlying C compiler on each run, regardless of the Mesham code or explicit user command line arguments. This is useful for applying certain machine-specific optimisations.
If you do not wish to set '''MESHAM_C_INCLUDE''' and '''MESHAM_C_LIBRARY''' then you can alternatively symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag displays any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
    group 0,1,2,3 {
       print("Hello from process "+itostring(pid())+"\n");
    };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
c8aca5c6a9529cc842f7e983cc7cb829029bb49e
1286
1285
2013-03-08T15:41:23Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_299|released=March 2013}}
== Introduction ==
This is the latest version of the Mesham compiler, based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch, and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions, which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_299 released 8th March 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.01 released 8th March 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running a 32-bit or 64-bit system, issue uname -m: a result of x86_64 means 64 bit, while any other value, such as i686, means 32 bit.''
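The architecture check described above can be scripted; a minimal sketch (the printed strings are illustrative only):

```shell
# Map the machine hardware name reported by uname -m to a word size,
# mirroring the rule above: x86_64 means 64 bit, anything else 32 bit.
case "$(uname -m)" in
  x86_64) echo "64 bit" ;;
  *)      echo "32 bit" ;;
esac
```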
== Prerequisites ==
To compile and run Mesham code you need an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH2''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to obtain these packages if you do not already have them installed.
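Before proceeding you can quickly check that the suggested tools are on your PATH; a minimal sketch (the command names gcc, mpicc and mpiexec are the common defaults, your installation may differ):

```shell
# Report whether each suggested tool is visible on the PATH.
for tool in gcc mpicc mpiexec; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool found"
  else
    echo "$tool missing"
  fi
done
```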
== Installation Instructions ==
Although installation is a manual procedure, it is very simple and will be straightforward for anyone familiar with Linux.
It is suggested to download ''all components'', which provides you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add that location to your PATH environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its different components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied in the ''includes'' directory of the compiler and all-components archives). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate multiple values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
An optional environment variable is '''MESHAM_C_COMPILER_ARGS''', which allows specific flags to be provided to the underlying C compiler on each run, regardless of the Mesham code or explicit user command line arguments. This is useful for applying certain machine-specific optimisations.
If you do not wish to set '''MESHAM_C_INCLUDE''' and '''MESHAM_C_LIBRARY''' then you can alternatively symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
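Putting the exports together, a typical ''.bashrc'' snippet might look like the following; the paths are assumptions based on unpacking the all-components archive into $HOME/mesham, so adjust them to wherever you placed the files:

```shell
# Hypothetical install prefix - adjust to your unpack location.
MESHAM_HOME=$HOME/mesham

export MESHAM_C_COMPILER=mpicc                    # C compiler the mcc driver invokes
export MESHAM_SYS_INCLUDE=$MESHAM_HOME/includes   # directory of .mesh system include files
export MESHAM_C_INCLUDE=$MESHAM_HOME/rtl          # directory containing mesham.h
export MESHAM_C_LIBRARY=$MESHAM_HOME/rtl          # directory of libmesham.so and libgc.so
```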
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag displays any errors reported by the C compiler). All being well, an executable ''test'' will appear; run it via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
    group 0,1,2,3 {
       print("Hello from process "+itostring(pid())+"\n");
    };
 };
All being well, you should see the output (but the order of the lines will vary):
 Hello from process 0
 Hello from process 2
 Hello from process 3
 Hello from process 1
803ad2a6ab50b074bef50f8f36e11ca1e3966788
Template:Downloads
10
11
61
60
2013-01-20T15:25:11Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|Complete compiler (''version 1.0.0_239'')]]
*[[Download_rtl_1.0|Runtime library 1.0.0]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
05be9a6dfc74b3408a7f9ddc0512167f8f8341e7
62
61
2013-01-20T15:48:24Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|Complete compiler (''version 1.0.0_241'')]]
*[[Download_rtl_1.0|Runtime library 1.0.0]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
70d114d46ef340c335737c5eaf29418404336ac6
63
62
2013-03-08T15:41:53Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|Complete compiler (''version 1.0.0_299'')]]
*[[Download_rtl_1.0|Runtime library 1.0.01]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
a1a2e6f2d7644908dcadc3b158d1df4ea4acc1e0
Download rtl 1.0
0
232
1300
1299
2013-01-20T17:18:07Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham type oriented parallel programming language runtime library</metadesc>
{{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest runtime library compatible with version 1.0 of the Mesham compiler.|url=http://www.mesham.com|image=Runtimelibrary.png|version=1.0.0|released=January 2013}}
== Runtime Library Version 1.0 ==
Version 1.0 is currently the most up-to-date version of the Mesham runtime library and is required by Mesham 1.0. This version of the library has been re-engineered to support the [[Oubliette]] compiler line and as such is not backwards compatible with older versions.
This line of runtime library is known as the [[Idaho]] line.
== Download ==
You can download the runtime library: '''[http://www.mesham.com/downloads/rtl64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtl32.zip 32 bit here]'''
== Garbage collector ==
By default you will also need the libgc garbage collector, which can be found [[Download_libgc|here]].
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 1.0|download 1.0 package]] page.
d8833bb01e7fc54ee78047e85708d04a062616ec
1301
1300
2013-01-20T17:18:16Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham type oriented parallel programming language runtime library</metadesc>
{{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest runtime library compatible with version 1.0 of the Mesham compiler.|url=http://www.mesham.com|image=Runtimelibrary.png|version=1.0.0|released=January 2013}}
== Runtime Library Version 1.0 ==
Version 1.0 is currently the most up-to-date version of the Mesham runtime library and is required by Mesham 1.0. This version of the library has been re-engineered to support the [[Oubliette]] compiler line and as such is not backwards compatible with older versions.
This line of runtime library is known as the [[Idaho]] line.
== Download ==
You can download the runtime library, '''[http://www.mesham.com/downloads/rtl64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtl32.zip 32 bit here]'''
== Garbage collector ==
By default you will also need the lib GC garbage collector, which can be found [[Download_libgc|here]].
== Instructions ==
Detailed instructions covering installation, usage and options are included with the library. They can also be found on the [[Download 1.0|download 1.0 package]] page.
86d649ea39f1bf1d38650ba6f9137ef37d6d0a90
1302
1301
2013-03-08T15:53:14Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham type oriented parallel programming language runtime library</metadesc>
{{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest runtime library compatible with version 1.0 of the Mesham compiler.|url=http://www.mesham.com|image=Runtimelibrary.png|version=1.0.01|released=March 2013}}
== Runtime Library Version 1.0 ==
Version 1.0 is currently the most up-to-date version of the Mesham runtime library and is required by Mesham 1.0. This version of the library has been re-engineered to support the [[Oubliette]] compiler line and as such is not backwards compatible with older versions.
This line of runtime library is known as the [[Idaho]] line.
== Download ==
You can download the runtime library, '''[http://www.mesham.com/downloads/rtl64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtl32.zip 32 bit here]'''
== Garbage collector ==
By default you will also need the lib GC garbage collector, which can be found [[Download_libgc|here]].
== Instructions ==
Detailed instructions covering installation, usage and options are included with the library. They can also be found on the [[Download 1.0|download 1.0 package]] page.
44c1e897211bbc9834b03f63e5182b97c04563ba
Oubliette
0
176
942
941
2013-01-20T17:26:54Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries in future via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
9a1b4c696e71b6ee2743be9bdc0e53d909dfab33
943
942
2013-01-31T17:07:20Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries in future via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
=== Build 241 (January 2013) ===
First alpha release of the Oubliette compiler
71869a5c04a8eef0d4acb12f89cd92c45e457428
944
943
2013-02-07T12:13:28Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries in future via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
=== Build 241 (January 2013) ===
First alpha release of the Oubliette compiler
704e035859940697d18e590d9536f71fe8bcba5a
945
944
2013-02-07T14:44:57Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries in future via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
=== Build 241 (January 2013) ===
First alpha release of the Oubliette compiler
d3a83a5116186423299196394ee5a696a326a38a
946
945
2013-02-23T18:35:47Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries in future via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Environment arguments provided to underlying C compiler for optimisation
=== Build 241 (January 2013) ===
First alpha release of the Oubliette compiler
2253803739e3b4877b413c2bd8d5548c34196123
947
946
2013-03-07T16:28:42Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries in future via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
First alpha release of the Oubliette compiler
185932507f7c8c6ade20a546b060eb870915aa5d
948
947
2013-03-08T15:43:05Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries in future via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
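The dynamic-library mechanism mentioned above is the standard dlopen pattern. The sketch below is not Mesham or the Oubliette extension API (no Mesham names are assumed); it simply demonstrates in Python, via ctypes, how a shared library's symbols can be bound at runtime:

```python
import ctypes

def load_extension(path):
    """Load a shared library at runtime and return a handle: the same
    dlopen-style mechanism a compiler can use for plug-in type libraries."""
    return ctypes.CDLL(path)

# A real extension would be loaded as e.g. load_extension("./types.so");
# here we pass None, which on POSIX systems opens the running process's
# own symbols (libc included), purely to demonstrate the mechanism.
lib = load_extension(None)
lib.strlen.restype = ctypes.c_size_t
lib.strlen.argtypes = [ctypes.c_char_p]
print(lib.strlen(b"mesham"))  # 6
```

Once loaded, symbols resolved from the handle behave like ordinary functions, which is what makes extension ''.so'' type libraries attractive: the compiler core need not know about them at build time.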
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
First alpha release of the Oubliette compiler
42cae7d40c7e56878f5d8de8e16d7dc00a18177b
Download libgc
0
231
1296
1295
2013-01-20T17:29:01Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham uses lib GC to garbage collect during execution, download it here</metadesc>
{{Applicationbox|name=Lib GC 7.2|author=Hans Boehm|desc=Garbage collector library used by the Mesham runtime library.|url=http://www.hpl.hp.com/personal/Hans_Boehm/gc/|image=Robot-cleaner.jpg|version=7.2|released=May 2012}}
== Introduction ==
The default runtime library uses the Boehm-Demers-Weiser conservative garbage collector. It allows you to allocate memory without explicitly deallocating it when it is no longer useful; the collector automatically recycles memory once it determines that it can no longer be otherwise accessed.
== Download ==
We provide download links, '''[http://www.mesham.com/downloads/libgc64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/libgc32.zip 32 bit here]''', to precompiled versions of the library, which are all that is required to use Mesham. We suggest you use these provided, precompiled forms as they have been tested with Mesham. Future versions (later than 7.2) will likely work fine, although they will not necessarily have been tested.
You can access further information and documentation, and download the latest source code, from the project website [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ here].
6f0107dd109da09f4a971bb9f90eb34e3928bfc0
1297
1296
2013-01-20T17:29:18Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham uses lib GC to garbage collect during execution, download it here</metadesc>
{{Applicationbox|name=Lib GC 7.2|author=Hans Boehm|desc=Garbage collector library used by the Mesham runtime library.|url=http://www.hpl.hp.com/personal/Hans_Boehm/gc/|image=Robot-cleaner.jpg|version=7.2|released=May 2012}}
== Introduction ==
The default runtime library uses the Boehm-Demers-Weiser conservative garbage collector. It allows you to allocate memory without explicitly deallocating it when it is no longer useful; the collector automatically recycles memory once it determines that it can no longer be otherwise accessed.
== Download ==
We provide download links, '''[http://www.mesham.com/downloads/libgc64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/libgc32.zip 32 bit here]''', to precompiled versions of the library, which are all that is required to use Mesham. We suggest you use these provided, precompiled forms as they have been tested with Mesham. Future versions (later than 7.2) will likely work fine, although they will not necessarily have been tested.
You can access further information and documentation, and download the latest source code, from the project website [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ here].
0f310d35309597c8cac05f1d1abff381a73eb351
Template:News
10
209
1136
1135
2013-01-20T19:17:32Z
Polas
1
wikitext
text/x-wiki
* Latest version of Mesham (1.0.0_241) made available for download
108f2ef35c203fb3e40762215dd3044759968905
1137
1136
2013-01-20T19:17:56Z
Polas
1
wikitext
text/x-wiki
* Latest version of Mesham (1.0.0_241) made available for [[Download 1.0|download]]
327c41935de93741f8d5504ad985301015733931
1138
1137
2013-01-20T19:18:10Z
Polas
1
wikitext
text/x-wiki
* Latest version of Mesham ''(1.0.0_241)'' made available for [[Download 1.0|download]]
a5d66c7a12271b0c8ad85562474c32d19f8a23b7
1139
1138
2013-03-08T15:42:37Z
Polas
1
wikitext
text/x-wiki
* Update to Mesham alpha release ''(1.0.0_299)'' available [[Download 1.0|here]]
46486705f4a12fa4343bc75d7e59647a545fe3af
1140
1139
2013-03-08T16:00:34Z
Polas
1
wikitext
text/x-wiki
* Specification version 1.0a4 released [http://www.mesham.com/downloads/specification1a4.pdf here]
* Update to Mesham alpha release ''(1.0.0_299)'' available [[Download 1.0|here]]
a50defe40534f2d591c114f7a579ac50dd9a8bcf
Tutorial - Dynamic Parallelism
0
237
1317
2013-01-31T16:07:39Z
Polas
1
Created page with '<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc> '''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]] == Introduction == If you are following these tut…'
wikitext
text/x-wiki
<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]]
== Introduction ==
If you are following these tutorials in order then you could be forgiven for assuming that Mesham requires the programmer to explicitly set the number of processes in their code. This is not the case: whilst structuring your code around a fixed number of processes can lead to cleaner code, Mesham also supports a dynamic number of processes which is decided upon at runtime. This tutorial will look at how you can use dynamic parallelism and write your code in this manner.
== In its simplest form ==
#include <parallel>
#include <io>
#include <string>
function void main() {
print(itostring(pid())+"\n");
};
Compile the above code and run it with one process; now run it with ten, now with any number you want. Notice how, even though the code explicitly requires only one process, running it with more simply executes that code on all the extra processes as well. There are a number of rules associated with writing parallel codes in this fashion. Firstly, '''the number of processes can exceed the required number but it cannot be smaller''': if our code requires ten processes then we can run it with twenty, one hundred or even one thousand, but we cannot run it with nine. Secondly, the code and data applicable to these extra processes is all variables allocated [[Multiple|multiple]] and all code which is written SPMD style (i.e. outside of [[Par|par]], [[Group|group]], [[Proc|proc]] and parallel composition).
== A more complex example ==
So let's have a look at something a bit more complex that involves the default shared-memory communication:
#include <parallel>
#include <io>
#include <string>
function void main() {
var numberProc:=processes();
var s:array[Int, numberProc]::allocated[single[on[0]]];
s[pid()]:=pid();
sync;
proc 0 {
var i;
for i from 0 to processes() - 1 {
print(itostring(i)+" = "+itostring(s[i])+"\n");
};
};
};
Compile and run this example with any number of processes and look at how the code handles us changing this number. There are a couple of general points to make about this code. Firstly, notice that we are still using the [[Proc|proc]] parallel construct of Mesham for process selection (which is absolutely fine to do). We could instead have written something like ''if (pid()==0)''; the choice is entirely up to the programmer.
Next, modify variable ''s'' to be on process 2 (and change the [[Proc|proc]] statement to run on this process also). If you recompile and run this code then it will work fine as long as the number of processes is at least the required number (which is 3). If you were to run the code with 2 processes, for example, then it will give you an error; the only exception is the usual rule that if you run it with one process then Mesham will automatically spawn the required number and run over these. However, this illustration raises an important point: how can we (easily) tell how many processes to use? Happily there are two ways: either compile the code using the ''-summary'' flag, or run the resulting Mesham executable with the ''-p'' flag, which will report how many processes that executable expects to be run over.
== Dynamic type arguments ==
12c8a8afc5379c00a521d781b849835339e74ab2
1318
1317
2013-01-31T16:21:01Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]]
== Introduction ==
If you are following these tutorials in order then you could be forgiven for assuming that Mesham requires the programmer to explicitly set the number of processes in their code. This is not the case: whilst structuring your code around a fixed number of processes can lead to cleaner code, Mesham also supports a dynamic number of processes which is decided upon at runtime. This tutorial will look at how you can use dynamic parallelism and write your code in this manner.
== In its simplest form ==
#include <parallel>
#include <io>
#include <string>
function void main() {
print(itostring(pid())+"\n");
};
Compile the above code and run it with one process; now run it with ten, now with any number you want. Notice how, even though the code explicitly requires only one process, running it with more simply executes that code on all the extra processes as well. There are a number of rules associated with writing parallel codes in this fashion. Firstly, '''the number of processes can exceed the required number but it cannot be smaller''': if our code requires ten processes then we can run it with twenty, one hundred or even one thousand, but we cannot run it with nine. Secondly, the code and data applicable to these extra processes is all variables allocated [[Multiple|multiple]] and all code which is written SPMD style (i.e. outside of [[Par|par]], [[Group|group]], [[Proc|proc]] and parallel composition).
== A more complex example ==
So let's have a look at something a bit more complex that involves the default shared-memory communication:
#include <parallel>
#include <io>
#include <string>
function void main() {
var numberProc:=processes();
var s:array[Int, numberProc]::allocated[single[on[0]]];
s[pid()]:=pid();
sync;
proc 0 {
var i;
for i from 0 to processes() - 1 {
print(itostring(i)+" = "+itostring(s[i])+"\n");
};
};
};
Compile and run this example with any number of processes and look at how the code handles us changing this number. There are a couple of general points to make about this code. Firstly, notice that we are still using the [[Proc|proc]] parallel construct of Mesham for process selection (which is absolutely fine to do). We could instead have written something like ''if (pid()==0)''; the choice is entirely up to the programmer.
Next, modify variable ''s'' to be on process 2 (and change the [[Proc|proc]] statement to run on this process also). If you recompile and run this code then it will work fine as long as the number of processes is at least the required number (which is 3). If you were to run the code with 2 processes, for example, then it will give you an error; the only exception is the usual rule that if you run it with one process then Mesham will automatically spawn the required number and run over these. However, this illustration raises an important point: how can we (easily) tell how many processes to use? Happily there are two ways: either compile the code using the ''-summary'' flag, or run the resulting Mesham executable with the ''-p'' flag, which will report how many processes that executable expects to be run over.
== Dynamic type arguments ==
Often, when writing parallel code in this manner, you also want to use flexible message passing constructs. Happily, all of the message passing override types, such as [[Channel|channel]], [[Reduce|reduce]] and [[Broadcast|broadcast]], support arguments which are only known at runtime. Let's look at an example to motivate this.
#include <parallel>
#include <io>
#include <string>
function void main() {
var a:=pid();
var b:=a+1;
var c:=a-1;
var c1:Int::allocated[multiple]::channel[a,b];
var c2:Int::allocated[multiple]::channel[c,a];
var t:=0;
if (pid() > 0) t:=c2;
if (pid() < processes() - 1) c1:=t+a;
t:=t+a;
if (pid() + 1 == processes()) print(itostring(t)+"\n");
};
The above code is a prefix-sums style algorithm, where each process sends to the next one (whose id is one greater than its own) its own id plus the ids of all processes before it. The process with the largest id then displays the total, which obviously depends on the number of processes used to run the code. One point to note is that we can (currently) only use variables and values as arguments to types; for example, if you used the function call ''pid()'' directly in the [[Channel|channel]] type then it would give a syntax error. This is a limitation of the Mesham parser and will be addressed in a future release.
2854589b3a8d0b4a2bdc66549cee68c7b61d753a
1319
1318
2013-01-31T16:21:21Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]]
== Introduction ==
If you are following these tutorials in order then you could be forgiven for assuming that Mesham requires the programmer to explicitly set the number of processes in their code. This is not the case: whilst structuring your code around a fixed number of processes can lead to cleaner code, Mesham also supports a dynamic number of processes which is decided upon at runtime. This tutorial will look at how you can use dynamic parallelism and write your code in this manner.
== In its simplest form ==
#include <parallel>
#include <io>
#include <string>
function void main() {
print(itostring(pid())+"\n");
};
Compile the above code and run it with one process; now run it with ten, now with any number you want. Notice how, even though the code explicitly requires only one process, running it with more simply executes that code on all the extra processes as well. There are a number of rules associated with writing parallel codes in this fashion. Firstly, '''the number of processes can exceed the required number but it cannot be smaller''': if our code requires ten processes then we can run it with twenty, one hundred or even one thousand, but we cannot run it with nine. Secondly, the code and data applicable to these extra processes is all variables allocated [[Multiple|multiple]] and all code which is written SPMD style (i.e. outside of [[Par|par]], [[Group|group]], [[Proc|proc]] and parallel composition).
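The SPMD behaviour just described can be mimicked outside Mesham. The following is a rough, purely illustrative Python sketch using the standard multiprocessing module (not Mesham's runtime): one body of code, executed by however many processes we choose to launch, each reporting its own id.

```python
from multiprocessing import Process, Queue

def main(pid, out):
    # Stand-in for the Mesham body: every process reports its own id.
    out.put(pid)

if __name__ == "__main__":
    # The SPMD body above is unchanged whether we launch 1, 4 or 100 processes.
    nprocs = 4
    out = Queue()
    workers = [Process(target=main, args=(rank, out)) for rank in range(nprocs)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(sorted(out.get() for _ in range(nprocs)))  # [0, 1, 2, 3]
```

Changing ''nprocs'' plays the same role as launching the compiled Mesham executable with a different process count: no change to the body is needed.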
== A more complex example ==
So let's have a look at something a bit more complex that involves the default shared-memory communication:
#include <parallel>
#include <io>
#include <string>
function void main() {
var numberProc:=processes();
var s:array[Int, numberProc]::allocated[single[on[0]]];
s[pid()]:=pid();
sync;
proc 0 {
var i;
for i from 0 to processes() - 1 {
print(itostring(i)+" = "+itostring(s[i])+"\n");
};
};
};
Compile and run this example with any number of processes and look at how the code handles us changing this number. There are a couple of general points to make about this code. Firstly, notice that we are still using the [[Proc|proc]] parallel construct of Mesham for process selection (which is absolutely fine to do). We could instead have written something like ''if (pid()==0)''; the choice is entirely up to the programmer.
Next, modify variable ''s'' to be on process 2 (and change the [[Proc|proc]] statement to run on this process also). If you recompile and run this code then it will work fine as long as the number of processes is at least the required number (which is 3). If you were to run the code with 2 processes, for example, then it will give you an error; the only exception is the usual rule that if you run it with one process then Mesham will automatically spawn the required number and run over these. However, this illustration raises an important point: how can we (easily) tell how many processes to use? Happily there are two ways: either compile the code using the ''-summary'' flag, or run the resulting Mesham executable with the ''-p'' flag, which will report how many processes that executable expects to be run over.
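To see the shape of this example outside Mesham, here is a rough Python analogue (illustrative only, not Mesham's implementation): a Queue stands in for the array allocated ''single[on[0]]'', and process 0 plays the ''proc 0'' gathering role.

```python
from multiprocessing import Process, Queue

def main(pid, nprocs, q, out):
    # s[pid()] := pid(): every process contributes its own entry. The shared
    # Queue stands in for the array held single[on[0]] on process 0.
    q.put((pid, pid))
    if pid == 0:
        # The 'proc 0' block: gather every entry, then emit them in order.
        s = dict(q.get() for _ in range(nprocs))
        out.put([s[i] for i in range(nprocs)])

if __name__ == "__main__":
    nprocs = 4
    q, out = Queue(), Queue()
    workers = [Process(target=main, args=(r, nprocs, q, out)) for r in range(nprocs)]
    for w in workers:
        w.start()
    print(out.get())  # [0, 1, 2, 3]
    for w in workers:
        w.join()
```

Note that no explicit ''sync'' is needed here because the blocking ''q.get()'' calls provide the ordering that Mesham's barrier gives in the original.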
== Dynamic type arguments ==
Often, when writing parallel code in this manner, you also want to use flexible message passing constructs. Happily, all of the message passing override types, such as [[Channel|channel]], [[Reduce|reduce]] and [[Broadcast|broadcast]], support arguments which are only known at runtime. Let's look at an example to motivate this.
#include <parallel>
#include <io>
#include <string>
function void main() {
var a:=pid();
var b:=a+1;
var c:=a-1;
var c1:Int::allocated[multiple]::channel[a,b];
var c2:Int::allocated[multiple]::channel[c,a];
var t:=0;
if (pid() > 0) t:=c2;
if (pid() < processes() - 1) c1:=t+a;
t:=t+a;
if (pid() + 1 == processes()) print(itostring(t)+"\n");
};
The above code is a prefix-sums style algorithm, where each process sends to the next one (whose id is one greater than its own) its own id plus the ids of all processes before it. The process with the largest id then displays the total, which obviously depends on the number of processes used to run the code. One point to note is that we can (currently) only use variables and values as arguments to types; for example, if you used the function call ''pid()'' directly in the [[Channel|channel]] type then it would give a syntax error. This is a limitation of the Mesham parser and will be addressed in a future release.
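For readers without a Mesham installation to hand, the same chain-of-channels pattern can be sketched in Python using multiprocessing pipes. This is an illustrative analogue, not Mesham's runtime; each Pipe plays the role of one ''channel'' between neighbouring process ids.

```python
from multiprocessing import Process, Pipe

def worker(pid, nprocs, recv_conn, send_conn, result):
    # Mirror of the Mesham body: t starts at 0, the lower neighbour's running
    # total is received, our own pid is added, and the sum is forwarded up.
    t = 0
    if pid > 0:
        t = recv_conn.recv()       # c2: receive from process pid-1
    if pid < nprocs - 1:
        send_conn.send(t + pid)    # c1: send to process pid+1
    t = t + pid
    if pid + 1 == nprocs:
        result.send(t)             # the last process reports the total

if __name__ == "__main__":
    nprocs = 5
    links = [Pipe() for _ in range(nprocs - 1)]  # one 'channel' per neighbour pair
    res_recv, res_send = Pipe()
    procs = []
    for pid in range(nprocs):
        recv = links[pid - 1][0] if pid > 0 else None
        send = links[pid][1] if pid < nprocs - 1 else None
        procs.append(Process(target=worker, args=(pid, nprocs, recv, send, res_send)))
    for p in procs:
        p.start()
    print(res_recv.recv())  # 0+1+2+3+4 = 10
    for p in procs:
        p.join()
```

As in the Mesham version, the result depends only on how many processes are launched; the worker body itself never changes.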
[[Category: Tutorials|Dynamic Parallelism]]
faeca19c0aca0840374ea08be95103d9b1dc9cf1
1320
1319
2013-02-01T15:00:43Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]] :: [[Tutorial_-_Advanced Types|next]]
== Introduction ==
If you are following these tutorials in order then you could be forgiven for assuming that Mesham requires the programmer to explicitly set the number of processes in their code. This is entirely untrue and, whilst structuring your code around this assumption can lead to cleaner code, Mesham supports a dynamic number of processes which is decided upon at runtime. This tutorial will look at how you can use dynamic parallelism and write your code in this manner.
== In its simplest form ==
#include <parallel>
#include <io>
#include <string>
function void main() {
print(itostring(pid())+"\n");
};
Compile the above code and run it with one process, then with ten, then with any number you want. See how, even though the code explicitly requires one process, running with more simply executes that code on all the other processes? There are a number of rules associated with writing parallel codes in this fashion. Firstly, '''the number of processes can exceed the required number but it can not be smaller''': if our code requires ten processes then we can run it with twenty, one hundred or even one thousand, however we can not run it with nine. Secondly, the code and data applicable to these extra processes is all variables allocated [[Multiple|multiple]] and all code written SPMD style (i.e. outside of [[Par|par]], [[Group|group]], [[Proc|proc]] and parallel composition.)
== A more complex example ==
So let's have a look at something a bit more complex that involves the default shared memory communication:
#include <parallel>
#include <io>
#include <string>
function void main() {
var numberProc:=processes();
var s:array[Int, numberProc]::allocated[single[on[0]]];
s[pid()]:=pid();
sync;
proc 0 {
var i;
for i from 0 to processes() - 1 {
print(itostring(i)+" = "+itostring(s[i])+"\n");
};
};
};
Compile and run this example with any number of processes and look at how the code handles us changing this number. There are a couple of general points to make about this code. Firstly, notice that we are still using the [[Proc|proc]] parallel construct of Mesham for process selection (which is absolutely fine to do); we could instead have written something like ''if (pid()==0)'' - the choice is entirely up to the programmer.
Next, modify variable ''s'' to be on process 2 (and change the [[Proc|proc]] statement to run on this process also). If you recompile and run this code then it will work fine as long as the number of processes is at least the required number (which is 3). If you were to run the code with 2 processes, for example, then it will give you an error; the one exception is the usual rule that if you run it with a single process then Mesham will automatically spawn the required number and run over these. However, this illustration raises an important point - how can we (easily) tell how many processes to use? Happily there are two ways: either compile the code using the ''-summary'' flag, or run the resulting Mesham executable with the ''-p'' flag, which will report how many processes that executable expects to run over.
== Dynamic type arguments ==
Often, when wanting to write parallel code in this manner, you also want to use flexible message passing constructs. Happily all of the message passing override types such as [[Channel|channel]], [[Reduce|reduce]], [[Broadcast|broadcast]] support the provision of arguments which are only known at runtime. Let's have a look at an example to motivate this.
#include <parallel>
#include <io>
#include <string>
function void main() {
var a:=pid();
var b:=a+1;
var c:=a-1;
var c1:Int::allocated[multiple]::channel[a,b];
var c2:Int::allocated[multiple]::channel[c,a];
var t:=0;
if (pid() > 0) t:=c2;
if (pid() < processes() - 1) c1:=t+a;
t:=t+a;
if (pid() + 1 == processes()) print(itostring(t)+"\n");
};
The above code is a prefix-sums style algorithm, where each process sends to the next one (whose id is one greater than its own) its own id plus the ids of all processes before it. The process with the largest id then displays the total, which obviously depends on the number of processes used to run the code. One point to note is that we can (currently) only use variables and values as arguments to types; for example, if you used the function call ''pid()'' directly in the [[Channel|channel]] type then it would give a syntax error. This is a limitation of the Mesham parser and will be addressed in a future release.
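To see why the last process ends up with the sum of all the ids, the chain of channel sends can be simulated sequentially. The following is a hedged sketch in Python (not Mesham): the ''simulate'' function and its names are illustrative only, and the loop stands in for what the processes do concurrently via ''c1'' and ''c2''.

```python
# Sequential simulation of the channel-based prefix-sum example above.
# Each "process" p receives the running total from p-1, adds its own id
# and forwards the result, so the last process holds 0+1+...+(n-1).
def simulate(num_processes):
    totals = []
    incoming = 0  # process 0 receives nothing and starts from 0
    for p in range(num_processes):
        t = incoming if p > 0 else 0  # if (pid() > 0) t:=c2;
        t += p                        # t:=t+a;
        if p < num_processes - 1:
            incoming = t              # c1:=t+a; read by process p+1
        totals.append(t)
    return totals

# With 4 processes the last one prints 0+1+2+3 = 6.
print(simulate(4)[-1])
```

Running the Mesham code with four processes should therefore display 6, and in general the result is n(n-1)/2 for n processes.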
[[Category: Tutorials|Dynamic Parallelism]]
fcd3f98ee7e1886d9dd363b5becd2d01fb4f418f
Tutorial - Arrays
0
223
1233
1232
2013-01-31T16:21:51Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing collecting data together via arrays in Mesham</metadesc>
'''Tutorial number seven''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
An [[Array|array]] is a collection of element data in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall have a look at how to create, use and communicate arrays.
== Simple arrays ==
function void main() {
var a:array[Int,10];
};
The above code will declare variable ''a'' to be an [[Array|array]] of ten [[Int|Ints]] which are indexed 0 to 9 inclusively. In the absence of further information a set of default types will be applied, which are: [[Heap|heap]], [[Onesided|onesided]], [[Row|row]], [[Allocated|allocated]], [[Multiple|multiple]]. Arrays, when allocated to the heap, are subject to garbage collection which will remove them when no longer used.
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
var i;
for i from 0 to 9 {
a[i]:=i;
};
for i from 0 to 9 {
print(itostring(a[i]));
};
};
The code snippet demonstrates writing to and reading from elements of an array; if you compile and run this code then you will see it displays the values ''0'' to ''9'' on standard output. We can access an element of an array (for reading or writing) via the ''[x]'' syntax, where ''x'' is either an [[Int]] constant or variable.
=== Arrays and functions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
fill(a);
display(a);
};
function void fill(var a:array[Int,10]) {
var i;
for i from 0 to 9 {
a[i]:=i;
};
};
function void display(var a:array[Int]) {
var i;
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code demonstrates passing arrays into functions and there are a couple of noteworthy points to make here. First, because an [[Array|array]] is, by default, allocated to the [[Heap|heap]], as discussed in the [[Tutorial - Functions|functions tutorial]], this is pass by reference. Hence modifications made in the ''fill'' function do affect the original data allocated in the ''main'' function, which is what we want here. Secondly, see that the type we provide to the ''display'' function does not have any explicit size associated with the array? It is not always possible to know the size of an array that is being passed into a function, so Mesham allows the type of a function argument to be specified without a size, but with two restrictions: first, it must be a one dimensional array and secondly, no compile time bounds checking can take place.
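The pass-by-reference behaviour described above has a direct analogue in many languages. As a hedged illustration (in Python rather than Mesham), lists behave like heap-allocated arrays: a function that writes into its argument mutates the caller's data, just as ''fill'' does.

```python
# Analogy only: Python lists, like Mesham's heap-allocated arrays,
# are passed by reference, so fill() modifies the caller's data.
def fill(a):
    for i in range(len(a)):
        a[i] = i

def display(a):
    print("".join(str(v) for v in a))

data = [0] * 10
fill(data)       # mutates the caller's list in place
display(data)    # prints 0123456789
```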
=== Multi dimensional arrays ===
Arrays can be any number of dimensions just by adding extra bounds into the type declaration:
function void main() {
var a:array[Int,16,8];
a[0][1]:=23;
};
This code illustrates declaring variable ''a'' to be an [[Array|array]] of two dimensions; the first of size 16 and the second 8. By default all allocation of arrays is [[Row|row major]] although this can be overridden. Line three illustrates writing into an element of a two dimensional array.
== Communication of arrays ==
Arrays can be communicated entirely, per dimension or by individual elements.
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
a[0][1]:=28;
};
sync a;
proc 1 {
print(itostring(a[0][1])+"\n");
};
};
In this example process 0 writes to the (remote) memory of process 1 which contains the array, synchronisation occurs and then the value is displayed by process 1 to standard output.
=== Communicating multiple dimensions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 7 {
a[2][i]:=i;
};
};
sync a;
proc 1 {
var i;
for i from 0 to 7 {
print(itostring(a[2][i])+"\n");
};
};
};
Compile and run this code and look at the output - it is just a list of the value ''8''. Not what you expected? This is because at each assignment ''a[2][i]:=i;'' we are setting the remote value of ''a'' at this specific index to be the value held in ''i''. However, the write does not complete until the [[Sync|synchronisation]], and at that point the value of ''i'' is ''8'' (the loop iterates up to and including 7, after which ''i'' is incremented but found to be too large and the loop ceases.) It is something to be aware of - the value of a variable being remotely written matters until after the corresponding synchronisation.
There are a number of ways in which we could change this code to make it do what we want; the easiest is to use a temporary variable allocated on the heap (which will be garbage collected after the synchronisation.) To do this, replace the ''proc 0'' block with:
proc 0 {
var i;
for i from 0 to 7 {
var m:Int::heap;
m:=i;
a[2][i]:=m;
};
};
This is an example of writing into remote memory of a process and modifying multiple indexes of an array (in any dimension.)
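This pitfall resembles a well-known behaviour in other languages: a deferred operation observes the value its variable has when it finally runs, not when it was set up. As a hedged analogy (in Python, not Mesham), closures capturing a loop variable all see its final value, and the fix - binding the current value into fresh storage - mirrors the heap temporary ''m'' above. (In Python the final value is 7 rather than 8, since the loop variable is not incremented past its last value.)

```python
# Each lambda here is a "deferred write": it reads i only when invoked,
# after the loop has finished, so every one observes the final value 7.
late = [lambda: i for i in range(8)]
late_results = [w() for w in late]
print(late_results)   # [7, 7, 7, 7, 7, 7, 7, 7]

# The fix binds the current value of i into fresh storage (a default
# argument here, the heap temporary m in the Mesham version).
fixed = [lambda m=i: m for i in range(8)]
fixed_results = [w() for w in fixed]
print(fixed_results)  # [0, 1, 2, 3, 4, 5, 6, 7]
```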
=== Communicating entire arrays ===
#include <io>
#include <string>
function void main() {
var a:array[Int,20]::allocated[single[on[1]]];
var b:array[Int,20]::allocated[single[on[2]]];
proc 1 {
var i;
for i from 0 to 19 {
a[i]:=1;
};
};
b:=a;
sync;
proc 2 {
var i;
for i from 0 to 19 {
print(itostring(b[i])+"\n");
};
};
};
This code example demonstrates populating an array held on one process, assigning it in its entirety to an array on another process (the ''b:=a'' statement), synchronising, and then the other process reading out all elements of the target array which has just been remotely written to.
== Row and column major ==
By default arrays are row major allocated using the [[Row|row]] type. This can be overridden to column major via the [[Col|col]] type.
function void main() {
var a:array[Int,16,8]::allocated[col::multiple];
};
will allocate array ''a'' to be an [[Int]] array of 16 by 8, allocated to all processes using column major memory allocation.
For something more interesting let's have a look at the following code:
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8];
var i;
var j;
for i from 0 to 15 {
for j from 0 to 7 {
a[i][j]:=(i*10) + j;
};
};
print(itostring(a::col[][14][7]));
};
By default variable ''a'' is [[Row|row major]] allocated and we are filling up the array in this fashion. However, in the [[Print|print]] statement we are accessing the indexes of this array in a [[Col|column major]] fashion. Try changing [[Col|col]] to [[Row|row]] or removing it altogether to see the difference in value. Behind the scenes the types are doing the appropriate memory lookup based upon their meaning and the indexes provided. Mixing memory allocation in this manner can be very useful for array transposition amongst other things. ''Exercise:'' Experiment with the [[Col|col]] and [[Row|row]] types and also see what effect placing them in the type chain of ''a'' has, like in the previous example.
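The difference between the two lookups comes down to how a pair of indexes is mapped to a flat memory offset. The following is a sketch of that arithmetic in Python (an analogy, not Mesham internals): storage is filled row major as in the example, then the same index pair is read back through both offset formulas.

```python
# Flat-offset arithmetic behind row- and column-major lookups for a
# 16 x 8 array, mirroring the indexes used in the example above.
ROWS, COLS = 16, 8

def row_major(i, j):
    return i * COLS + j    # consecutive j values are adjacent in memory

def col_major(i, j):
    return j * ROWS + i    # consecutive i values are adjacent in memory

# Fill flat storage in row-major order, as the Mesham code does.
flat = [0] * (ROWS * COLS)
for i in range(ROWS):
    for j in range(COLS):
        flat[row_major(i, j)] = i * 10 + j

print(flat[row_major(14, 7)])  # 147: the value written at [14][7]
print(flat[col_major(14, 7)])  # a different element, because the same
                               # indexes map to a different flat offset
```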
[[Category: Tutorials|Arrays]]
203e5cd30647b4015a9427fa9eb1374c24c298b8
Dartboard PI
0
139
764
763
2013-02-01T13:12:17Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method to find PI is a simple algorithm to find the value of PI. It must be noted that there are much better methods out there to find PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square is equal to the ratio between the two areas - which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI - in our example each process will perform this simulated throwing a number of times, and then each process's approximation of PI is combined and averaged by one of the processes to obtain the result. Very roughly, this means that with d darts, thrown over r rounds on n processes, the time taken in parallel is the time it takes to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded the consideration of communication costs from the parallel situation to simplify the concept.) Quite obviously, changing the number of processes, the number of rounds and the number of darts to throw in each round will directly change the accuracy of the result.
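The estimator itself can be checked sequentially before worrying about the parallel decomposition. The sketch below (in Python, as an analogy for the Mesham code that follows) performs the same rounds-of-darts averaging that each process does; the function name, parameters and fixed seed are illustrative only.

```python
import random

# Dartboard method: throw darts uniformly at the unit square and count
# those landing inside the quarter circle; hits/darts approximates pi/4.
def estimate_pi(darts, rounds, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(rounds):
        hits = sum(1 for _ in range(darts)
                   if rng.random() ** 2 + rng.random() ** 2 < 1.0)
        total += 4.0 * hits / darts   # one round's approximation of pi
    return total / rounds             # average over all rounds

print(round(estimate_pi(darts=10000, rounds=10), 2))
```

With 100,000 samples in total the estimate lands close to 3.14; increasing ''darts'' or ''rounds'' tightens it further, which is exactly the accuracy trade-off described above.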
== Source Code ==
#include <maths>
#include <io>
#include <string>
var m:=64; // number of processes
function void main() {
var calculatedPi:array[Double,m]:: allocated[single[on[0]]];
var mypi:Double;
var p;
par p from 0 to m - 1 {
var darts:=10000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i;
for i from 0 to rounds - 1 {
mypi:=mypi + (4.0 * (throwdarts(darts) / darts));
};
mypi:=mypi / rounds;
calculatedPi[p]:=mypi;
};
sync;
proc 0 {
var avepi:Double;
var i;
for i from 0 to m - 1 {
avepi:=avepi + calculatedPi[i];
};
avepi:=avepi / m;
print(dtostring(avepi, "%.2f")+"\n");
};
};
function Double throwdarts(var darts:Int)
{
var score:Double;
var n:=0;
for n from 0 to darts - 1 {
var xcoord:=randomnumber(0,1);
var ycoord:=randomnumber(0,1);
if ((pow(xcoord,2) + pow(ycoord,2)) < 1.0) {
score++; // hit the dartboard!
};
};
return score;
};
''This code requires at least Mesham version 1.0''
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]; a legacy version for Mesham 0.5 can be downloaded [http://www.mesham.com/downloads/pi-0.5.mesh here]
[[Category:Example Codes]]
db223d6445a217c0d2bcba772e49bfc65e7481f2
Tutorial - Advanced Types
0
238
1324
2013-02-01T14:59:48Z
Polas
1
Created page with '<metadesc>Tutorial describing advanced type features of Mesham</metadesc> '''Tutorial number nine''' - [[Tutorial_-_Dynamic Parallelism|prev]] == Introduction == Mesham has a nu…'
wikitext
text/x-wiki
<metadesc>Tutorial describing advanced type features of Mesham</metadesc>
'''Tutorial number nine''' - [[Tutorial_-_Dynamic Parallelism|prev]]
== Introduction ==
Mesham has a number of advanced typing features over and above type chains and type coercion. In this tutorial we will look at some of these, how they might be used and how they can simplify your program code.
== Type Variables ==
The language has a concept of a type variable, which is a compile-time, programmer-defined type representing a more complex type chain. Let's have a look at this in more detail via an example:
function void main() {
typevar typeA::=Int::allocated[multiple];
typevar typeB::=String::allocated[single[on[3]]];
var a:typeA;
var b:typeB;
};
In this example we create two type variables called ''typeA'' and ''typeB'' which represent different type chains. Then the actual program variables ''a'' and ''b'' are declared using these type variables. Notice how type assignment uses the ''::='' operator rather than normal program variable assignment which follows '':=''.
function void main() {
typevar typeA::=Int::allocated[multiple];
var a:typeA;
typeA::=String;
var b:typeA;
typeA::=typeA::const;
var c:typeA;
};
This example demonstrates assigning types and chains to existing type variables. At lines two and three we declare the type variable ''typeA'' and use it in the declaration of program variable ''a''. However, then on line five we modify the value of the type variable, ''typeA'' using the ''::='' operator to be a [[String]] instead. Then on line six we declare variable ''b'' using this type variable, which effectively sets the type to be a String. Line eight demonstrates how we can use the type variable in type chain modification and variable ''c'' is a constant [[String]].
'''Note:''' It is important to appreciate that type variables exist only during compilation, they do not exist at runtime and as such can not be used in conditional statements.
== Types of program variables ==
Mesham provides some additional keywords to help manage and reference the type of program variables, however it is imperative to remember that these are static only i.e. only exist during compilation.
=== Currenttype ===
Mesham has an inbuilt [[Currenttype|currenttype]] keyword which will result in the current type chain of a program variable.
a:currenttype a :: const;
a:a::const
In this code snippet both lines of code are identical, they will set the type of program variable ''a'' to be the current type chain combined with the [[Const|const]] type. Note that using a program variable in a type chain such as in the snippet above is a syntactic short cut for the current type (using the [[Currenttype|currenttype]] keyword) and either can be used.
=== Declaredtype ===
It can sometimes be useful to reference or even revert back to the declared type of a program variable later on in execution. To do this we supply the [[Declaredtype|declaredtype]] keyword.
function void main() {
var a:Int;
a:a::const;
a:declaredtype a;
a:=23;
};
This code will compile and work fine because, although we are coercing the type of ''a'' to be that of the [[Const|const]] type at line three, on line four we are reverting the type to be the declared type of the program variable. If you are unsure about why this is the case, then move the assignment around to see when the code will not compile with it.
== An example ==
Type variables are commonly used with [[Record|records]] and [[Referencerecord|referencerecords]]. In fact, the [[Complex|complex]] type obtained from the [[:Category:Maths_Functions|maths library]] is in fact a type variable.
#include <string>
#include <io>
typevar node;
node::=referencerecord[Int, "data", node, "next"]::heap;
function void main() {
var i;
var root:node;
root:=null;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=root;
root:=newnode;
};
while (root != null) {
print(itostring(root.data)+"\n");
root:=root.next;
};
};
This code will build up a linked list of numbers and then walk it, displaying each number as it goes. Whilst it is a relatively simple code, it illustrates how one might use type variables to improve the readability of their code. One important point concerns a current limitation of the Mesham parser: we are forced to declare the type variable ''node'' first and then separately assign to it on the following line. The reason is that the assignment references the ''node'' type variable itself in the [[Referencerecord|referencerecord]] type, and as such it must already exist.
== Limitations ==
There are some important limitations to note about the current use of types. Types currently only exist explicitly during compilation - what this means is that you can not do things such as passing them into functions or communicating them. Additionally, once allocation information (the [[Allocated|allocated]] type) and its subtypes have been set then you can not modify this, nor can you change the [[:Category:Element_Types|element type]].
[[Category: Tutorials|Advanced Types]]
72910f522923cc5963b05c0e090c72e374a55a65
1325
1324
2013-02-01T15:00:14Z
Polas
1
/* Limitations */
wikitext
text/x-wiki
<metadesc>Tutorial describing advanced type features of Mesham</metadesc>
'''Tutorial number nine''' - [[Tutorial_-_Dynamic Parallelism|prev]]
== Introduction ==
Mesham has a number of advanced typing features over and above type chains and type coercion. In this tutorial we will look at some of these, how they might be used and how they can simplify your program code.
== Type Variables ==
The language has a concept of a type variable, which is a compile-time, programmer-defined type representing a more complex type chain. Let's have a look at this in more detail via an example:
function void main() {
typevar typeA::=Int::allocated[multiple];
typevar typeB::=String::allocated[single[on[3]]];
var a:typeA;
var b:typeB;
};
In this example we create two type variables called ''typeA'' and ''typeB'' which represent different type chains. Then the actual program variables ''a'' and ''b'' are declared using these type variables. Notice how type assignment uses the ''::='' operator rather than normal program variable assignment which follows '':=''.
function void main() {
typevar typeA::=Int::allocated[multiple];
var a:typeA;
typeA::=String;
var b:typeA;
typeA::=typeA::const;
var c:typeA;
};
This example demonstrates assigning types and chains to existing type variables. At lines two and three we declare the type variable ''typeA'' and use it in the declaration of program variable ''a''. However, then on line five we modify the value of the type variable, ''typeA'' using the ''::='' operator to be a [[String]] instead. Then on line six we declare variable ''b'' using this type variable, which effectively sets the type to be a String. Line eight demonstrates how we can use the type variable in type chain modification and variable ''c'' is a constant [[String]].
'''Note:''' It is important to appreciate that type variables exist only during compilation, they do not exist at runtime and as such can not be used in conditional statements.
== Types of program variables ==
Mesham provides some additional keywords to help manage and reference the type of program variables, however it is imperative to remember that these are static only i.e. only exist during compilation.
=== Currenttype ===
Mesham has an inbuilt [[Currenttype|currenttype]] keyword which will result in the current type chain of a program variable.
a:currenttype a :: const;
a:a::const
In this code snippet both lines of code are identical, they will set the type of program variable ''a'' to be the current type chain combined with the [[Const|const]] type. Note that using a program variable in a type chain such as in the snippet above is a syntactic short cut for the current type (using the [[Currenttype|currenttype]] keyword) and either can be used.
=== Declaredtype ===
It can sometimes be useful to reference or even revert back to the declared type of a program variable later on in execution. To do this we supply the [[Declaredtype|declaredtype]] keyword.
function void main() {
var a:Int;
a:a::const;
a:declaredtype a;
a:=23;
};
This code will compile and work fine because, although we are coercing the type of ''a'' to be that of the [[Const|const]] type at line three, on line four we are reverting the type to be the declared type of the program variable. If you are unsure about why this is the case, then move the assignment around to see when the code will not compile with it.
== An example ==
Type variables are commonly used with [[Record|records]] and [[Referencerecord|referencerecords]]. In fact, the [[Complex|complex]] type obtained from the [[:Category:Maths_Functions|maths library]] is in fact a type variable.
#include <string>
#include <io>
typevar node;
node::=referencerecord[Int, "data", node, "next"]::heap;
function void main() {
var i;
var root:node;
root:=null;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=root;
root:=newnode;
};
while (root != null) {
print(itostring(root.data)+"\n");
root:=root.next;
};
};
This code will build up a linked list of numbers and then walk it, displaying each number as it goes. Whilst it is a relatively simple code, it illustrates how one might use type variables to improve the readability of their code. One important point concerns a current limitation of the Mesham parser: we are forced to declare the type variable ''node'' first and then separately assign to it on the following line. The reason is that the assignment references the ''node'' type variable itself in the [[Referencerecord|referencerecord]] type, and as such it must already exist.
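The structure built by the [[Referencerecord|referencerecord]] code can be sketched in Python for comparison (an analogy only; the ''Node'' class is illustrative): each new node is prepended to the front of the list, so walking it yields the numbers in reverse order of insertion.

```python
# Python sketch of the referencerecord linked list above: prepend ten
# nodes, then walk the list collecting each value.
class Node:
    def __init__(self, data, next_node):
        self.data = data     # the "data" field of the referencerecord
        self.next = next_node  # the "next" field

root = None
for i in range(10):
    root = Node(i, root)     # newnode.next := root; root := newnode

values = []
node = root
while node is not None:      # while (root != null) in the Mesham code
    values.append(node.data)
    node = node.next

print(values)                # prepending reverses the order: 9 down to 0
```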
== Limitations ==
There are some important limitations to note about the current use of types. Types currently only exist explicitly during compilation - what this means is that you can not do things such as passing them into functions or communicating them. Additionally, once allocation information (the [[Allocated|allocated]] type) and its subtypes have been set then you can not modify this, nor can you change the [[:Category:Element_Types|element type]].
[[Category: Tutorials|Advanced Types]]
6d8a56fd7500b141ce293a8e1e02f5cc91091c63
Mandelbrot
0
135
742
741
2013-02-04T13:46:29Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:mandle.gif|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which are quite simple, really do not matter for our purposes. The important issues are that, firstly, the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and, secondly, it will produce an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with specific colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
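The core of the computation is the escape-time iteration that ''iteratePixel'' performs in the source code below. As a hedged sketch (in Python, using complex numbers rather than separate x/y parts; the function name and bailout parameter mirror the Mesham code but are otherwise illustrative):

```python
# Escape-time iteration: repeatedly apply z := z*z + c and return the
# iteration at which |z|^2 exceeds the bailout, or -1 if the point never
# escapes within itermax iterations (i.e. it is taken to be in the set).
def iterate_point(c, itermax=1000, bailout=100.0):
    z = 0j
    for iteration in range(1, itermax + 1):
        z = z * z + c
        if abs(z) ** 2 > bailout:
            return iteration
    return -1

print(iterate_point(0 + 0j))      # the origin never escapes: -1
print(iterate_point(2 + 0j) > 0)  # a point far outside escapes quickly
```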
<br style="clear: both" />
== Performance ==
[[Image:mandlezoom.jpg|400px|left|Mandelbrot Performance Evaluation, Mesham against C-MPI]]
The Mandelbrot example was evaluated against one written in C-MPI on a supercomputing cluster. The graph details the performance of the two codes; their performance when run on small numbers of processors was so close that this region is not shown. Due to the embarrassingly parallel nature of this problem, the performance advantages of using Mesham do not start to stand out until a large number of processors is reached.
<br style="clear: both" />
== Source Code ==
#include <io>
#include <string>
typevar pixel::=record["r",Int,"g",Int,"b",Int];
var pnum:=16; // number of processes to run this on
var hxres:=512;
var hyres:=512;
var magnify:=1;
var itermax:=1000;
function Int iteratePixel(var hy:Float, var hx:Float) {
var cx:Double;
cx:=((((hx / hxres) - 0.5) / magnify) * 3) - 0.7;
var cy:Double;
cy:=(((hy / hyres) - 0.5) / magnify) * 3;
var x:Double;
var y:Double;
var iteration;
for iteration from 1 to itermax {
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100) {
return iteration;
};
};
return -1;
};
function void main() {
var mydata:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1 {
var tempd:array[record["r",Int,"g",Int,"b",Int], hyres];
var myStart:=p * (hyres / pnum);
var hy:Int;
for hy from myStart to (myStart + (hyres / pnum)) - 1 {
var hx;
for hx from 0 to hxres - 1 {
var iteration := iteratePixel(hy, hx);
tempd[hx]:=determinePixelColour(iteration);
};
mydata[hy]:=tempd;
sync mydata;
};
};
proc 0 {
createImageFile("picture.ppm", mydata);
};
};
function pixel determinePixelColour(var iteration:Int) {
var singlePixel:pixel;
if (iteration > -1) {
singlePixel.b:=(iteration * 10) + 100;
singlePixel.r:=(iteration * 3) + 50;
singlePixel.g:=(iteration * 3)+ 50;
if (iteration > 25) {
singlePixel.b:=0;
singlePixel.r:=(iteration * 10);
singlePixel.g:=(iteration * 5);
};
if (singlePixel.b > 255) singlePixel.b:=255;
if (singlePixel.r > 255) singlePixel.r:=255;
if (singlePixel.g > 255) singlePixel.g:=255;
} else {
singlePixel.r:=0;
singlePixel.g:=0;
singlePixel.b:=0;
};
return singlePixel;
};
function void createImageFile(var name:String, var mydata:array[pixel,hxres,hyres]) {
var file:=open(name,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(hyres));
writestring(file," ");
writestring(file,itostring(hxres));
writestring(file,"\n255\n");
// now write data into the file
var j;
for j from 0 to hyres - 1 {
var i;
for i from 0 to hxres - 1 {
writebinary(file,mydata[j][i].r);
writebinary(file,mydata[j][i].g);
writebinary(file,mydata[j][i].b);
};
};
close(file);
};
''This code is compatible with Mesham version 1.0 and later''
== Notes ==
To change the number of processes, edit ''pnum''. In order to change the size of the image edit ''hxres'' and ''hyres''. The Mandelbrot set will be calculated up until ''itermax'' for each point; by increasing this value you will get a crisper image (but it will take much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 will generate the whole image, and by increasing this value the computation is directed into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format; viewers for these on Unix based systems are easy to come by (e.g. Eye of GNOME) but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last part on process 0 so that a bitmap (BMP) is created instead.
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here] or a legacy Mesham 0.5 version [http://www.mesham.com/downloads/mandle-0.5.mesh here]
[[Category:Example Codes]]
108c9b66d317c6c409e982d68a71e2867000d236
Exp
0
239
1329
2013-02-07T10:34:47Z
Polas
1
Created page with '== Overview == This exp(x) function will return the exponent of ''x'' (e to the power of ''x''). * '''Pass:''' A [[Double]] * '''Returns:''' A [[Double]] representing the expon…'
wikitext
text/x-wiki
== Overview ==
This exp(x) function will return the exponent of ''x'' (e to the power of ''x'').
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the exponent
== Example ==
#include <maths>
var a:=exp(23.4);
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
e7b348c9733d2a7b9fb7862413157b50aad793bc
Tutorial - Functions
0
220
1205
1204
2013-02-07T12:12:28Z
Polas
1
/* My first function */
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of functions and functional abstraction in Mesham</metadesc>
'''Tutorial number three''' - [[Tutorial_-_Simple Types|prev]] :: [[Tutorial_-_Parallel Constructs|next]]
== Introduction ==
In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very useful aspect to many languages and allows for one to make their code more manageable. We shall also take a look at how to provide optional command line arguments to some Mesham code.
== My first function ==
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
var c:=myAddFunction(a,20);
print(itostring(c)+"\n");
};
The above code declares two functions: ''myAddFunction'', which takes two [[Int|Ints]] and returns an [[Int]] (the sum of the two numbers), and a ''main'' function which is the program entry point. In our ''main'' function you can see that we call ''myAddFunction'' with a mixture of the ''a'' variable and the constant value ''20''. The result of this call is then assigned to variable ''c'', which is displayed on standard output.
There are a number of points to note here. First, notice that each function body is terminated by the sequential composition (;) token. This is because all blocks in Mesham must be terminated with some composition and functions are no exception, although terminating a function with parallel composition is currently meaningless. Secondly, move ''myAddFunction'' so that it appears below the ''main'' function and recompile - see that it still works? Functions in Mesham can appear in any order and it is up to the programmer to decide which order makes their code most readable. As an exercise, notice that we don't really need variable ''c'' at all - remove it and, in the [[Print|print]] call, replace the reference to ''c'' with a call to our own function itself.
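A possible solution to the exercise, calling the function directly inside print, might look like the following sketch (using only the constructs introduced so far in this tutorial):
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
// no intermediate variable - the function call is itself the argument
print(itostring(myAddFunction(a,20))+"\n");
};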
== Function arguments ==
By default all [[:Category:Element Types|element types]] and [[Record|records]] are passed by value, whereas [[Array|arrays]] and [[Referencerecord|reference records]] are passed by reference. This is dependent on the manner in which these data types are allocated, the former using the [[Stack|stack]] type and the latter the [[Heap|heap]] type. We can control whether a function's arguments and return value are passed by value or by reference by specifying the [[Stack|stack]] (value), [[Static|static]] (value) or [[Heap|heap]] (reference) type in the chain.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int) {
mydata:=76;
};
If you compile and execute the above code then you will see the output ''10''. This is because, by default, an [[Int]] is passed by value: the value of ''a'' is passed into ''myChangeFunction'', which sets ''mydata'' equal to it. When we modify ''mydata'', because it occupies entirely different memory from ''a'', it has no effect upon ''a''.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int::heap) {
mydata:=76;
};
This code snippet is very similar to the previous one, but we have added the [[Heap|heap]] type to the chain of ''mydata'' - if you compile and execute this you will now see the output ''76''. This is because, by using the [[Heap|heap]] type, we have switched to pass by reference, which means that ''mydata'' and ''a'' share the same memory and hence a change to one will modify the other. As far as function arguments go, it is fine to have a variable whose memory is allocated by one means and pass it to a function which expects memory in a different form - such as above, where ''a'' is (by default) allocated on the stack but ''mydata'' is on the heap. In such cases Mesham handles the necessary transformations.
=== The return type ===
function Int::heap myNewFunction() {
var a:Int::heap;
a:=23;
return a;
};
The code snippet above will return an [[Int]] by reference when the function is called: inside the function we create variable ''a'', allocate it to [[Heap|heap]] memory, set its value and return it. However, an important distinction between function arguments and function return types is that the memory allocation of what we are returning must match the return type. For example, change the type chain in the declaration from ''Int::heap'' to ''Int::stack'' and recompile - see that there is an error? When we think about this logically it is the only way in which this can work - if we allocate on the [[Stack|stack]] then the memory is on the current function's stack frame, which is destroyed once that function returns; if we were to return a reference to an item on it then that item would no longer exist and bad things would happen! By ensuring that the memory allocations match, we have allocated ''a'' on the heap, which exists beyond the function call and will be garbage collected when appropriate.
== Leaving a function ==
Regardless of whether we are returning data from a function or not, we can use the [[Return|return]] statement on its own to force leaving that function.
function void myTestFunction(var b:Int) {
if (b==2) return;
};
In the above code, if variable ''b'' has a value of ''2'' then we will leave the function early. Note that we have not followed the conditional with an explicit block - this is allowed (as in many languages) for a single statement.
As an exercise, add some value after the return statement so that, for example, it reads something like ''return 23;'' - now attempt to recompile and see that you get an error, because in this case we are attempting to return a value when the function's definition says that it returns nothing.
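Conversely, if we do want to return a value at that point then the function's signature must declare it. A minimal sketch, assuming the function syntax shown earlier in this tutorial:
function Int myTestFunction(var b:Int) {
// legal now: the signature declares an Int return type
if (b==2) return 23;
return 0;
};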
== Command line arguments ==
The main function also supports the reading of command line arguments. You can declare the main function with either no arguments (as we have seen up until this point) or with exactly two arguments, the first an [[Int]] and the second an [[Array|array]] of [[String|Strings]].
#include <io>
#include <string>
function void main(var argc:Int, var argv:array[String]) {
var i;
for i from 0 to argc - 1 {
print(itostring(i)+": "+argv[i]+"\n");
};
};
Compile and run the above code. With no arguments you will just see the name of the program; if you now supply command line arguments (separated by spaces) then these will also be displayed. There are a couple of general points to note about the code above. Firstly, the variable names ''argc'' and ''argv'' for the command line arguments are the generally accepted names to use - although you can call these variables whatever you want if you are so inclined.
Secondly, notice how we only tell the [[Array|array]] type that it is a collection of [[String|Strings]] and give no information about its dimensions. This is allowed in a function argument's type, as we don't always know the size, but it limits us to one dimension and stops any error checking on the index bounds used to access elements. Lastly, see how we loop from 0 to ''argc - 1'': the [[For|for]] loop is inclusive of its bounds, so if ''argc'' were zero then one iteration would still occur, which is not what we want here.
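The inclusive bounds can be seen in isolation with the following sketch, which loops from 0 to 0 (assuming the for syntax shown above):
var i;
for i from 0 to 0 {
// this body executes exactly once: both bounds are inclusive
print("iteration\n");
};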
[[Category:Tutorials|Functions]]
071e1f87c2a958d2d18c42172fb1ea1328053716
The Compiler
0
225
1255
1254
2013-02-07T14:45:58Z
Polas
1
/* Command line options */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required; any of these will work with the generated code. Additionally our runtime library (known as Idaho) must also be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can run their program with just one process and the program will automatically spawn the number of processes required. Alternatively the executable can be run with the exact number of processes needed, which may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority do), the code can be executed properly on a multi-core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library and their behaviour is invoked via an API from the core compiler into the appropriate types.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
* '''-vtl''' ''Display information about currently loaded type libraries''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select certain options such as the C compiler and location of dependencies. It is not necessarily required to set all of these - a subset will be fine if that is appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
It is common to set these environment variables in your ''.bashrc'' script, which usually lives in your home directory. For example:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
would set these four variables; change the values as appropriate for your system.
649fb9cf048a5903427bbee755611605b0615f29
1256
1255
2013-02-07T14:47:14Z
Polas
1
/* Environment variables */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required; any of these will work with the generated code. Additionally our runtime library (known as Idaho) must also be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can run their program with just one process and the program will automatically spawn the number of processes required. Alternatively the executable can be run with the exact number of processes needed, which may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority do), the code can be executed properly on a multi-core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library and their behaviour is invoked via an API from the core compiler into the appropriate types.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
* '''-vtl''' ''Display information about currently loaded type libraries''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select certain options such as the C compiler and location of dependencies. It is not necessarily required to set all of these - a subset will be fine if that is appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
* '''MESHAM_TYPE_EXTENSIONS''' ''The location of dynamic (.so) type libraries to load in. If not set then no extension type libraries will be loaded''
It is common to set these environment variables in your ''.bashrc'' script, which usually lives in your home directory. For example:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
would set these four variables; change the values as appropriate for your system.
8aefb1594a726ed7f89a1a3995e8d064b4dbcfc4
1257
1256
2013-02-23T18:36:39Z
Polas
1
/* Environment variables */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required; any of these will work with the generated code. Additionally our runtime library (known as Idaho) must also be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux) as it contains any non-portable code which is needed, and it is also optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can run their program with just one process and the program will automatically spawn the number of processes required. Alternatively the executable can be run with the exact number of processes needed, which may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority do), the code can be executed properly on a multi-core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library and their behaviour is invoked via an API from the core compiler into the appropriate types.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
* '''-vtl''' ''Display information about currently loaded type libraries''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select certain options such as the C compiler and location of dependencies. It is not necessarily required to set all of these - a subset will be fine if that is appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_COMPILER_ARGS''' ''Optional arguments to supply to the C compiler, for instance optimisation flags''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
* '''MESHAM_TYPE_EXTENSIONS''' ''The location of dynamic (.so) type libraries to load in. If not set then no extension type libraries will be loaded''
It is common to set these environment variables in your ''.bashrc'' script, which usually lives in your home directory. For example:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
would set these four variables; change the values as appropriate for your system.
1266d0b2caa2c41f183476939a40a9d5f22effa3
Specification
0
177
976
975
2013-03-08T15:46:04Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Specification 1.0a_4|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham language specification|url=http://www.mesham.com|image=Spec.png|version=1.0a_4|released=February 2013}}
''The latest version of the Mesham language specification is 1.0a_4''
== Version 1.0a_4 - February 2013 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_4, is available for download. This version was released in February 2013 and is the base specification of the 1 series. It builds upon the previous 0.5 language by formalising aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language's types, with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality for the programmer.
Download [http://www.mesham.com/downloads/specification1a4.pdf this latest version here]
a17eb08d4dc829ac531a732242d4e58aa2d2c567
MediaWiki:Aboutsite
8
240
1332
2013-03-08T16:20:28Z
Polas
1
Created page with 'About Mesham'
wikitext
text/x-wiki
About Mesham
1a225dd5f20931244854af8a4f66fee7030eca49
Findchar
0
241
1334
2013-03-20T16:30:32Z
Polas
1
Created page with '== Overview == This findchar(s, c) function will return the index of the first occurrence of character ''c'' in string ''s''. * '''Pass:''' A [[String]] and [[Char]] * '''Retur…'
wikitext
text/x-wiki
== Overview ==
The findchar(s, c) function returns the index of the first occurrence of character ''c'' in string ''s''.
* '''Pass:''' A [[String]] and [[Char]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the character does not exist within the string
== Example ==
#include <string>
var a:="hello";
var c:=findchar(a,'l');
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
39485811e333055240f0402a46343edd8db914ff
New Compiler
0
157
857
856
2013-03-24T13:30:07Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
''Completed March 2013''
We have completely rewritten the Mesham compiler, retiring the [[Arjuna]] line (up to version 0.5) and creating the [[Oubliette]] line (from version 1.0 onwards). Further details about these compilers can be found on their respective pages. The previous [[Arjuna]] line is deprecated.
The following is a statement of intent that we wrote when deciding to rewrite the compiler:
The current Mesham compiler is mainly written in FlexibO, using Java to preprocess the source code. Whilst this combination is flexible, it is not particularly efficient in the compilation phase. To this end we are looking to reimplement the compiler in C. This reimplementation will allow us to combine all aspects of the compiler in one package, remove deprecated implementation code, clean up aspects of the compilation process, fix compiler bugs and provide a structured framework into which types can fit.
Like previous versions of the compiler, the results will be completely portable.
This page will be updated with news and developments in relation to this new compiler implementation.
1dc4ee8b61e1ef318bac31dc545238e396f6a016
858
857
2013-03-24T13:30:38Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
''Completed March 2013''
We have completely rewritten the Mesham compiler, retiring the [[Arjuna]] line (up to version 0.5) and creating the [[Oubliette]] line (from version 1.0 onwards). Further details about these compilers can be found on their respective pages. The previous [[Arjuna]] line is deprecated.
----
''The following is a statement of intent that we wrote when deciding to rewrite the compiler''
The current Mesham compiler is mainly written in FlexibO, using Java to preprocess the source code. Whilst this combination is flexible, it is not particularly efficient in the compilation phase. To this end we are looking to reimplement the compiler in C. This reimplementation will allow us to combine all aspects of the compiler in one package, remove deprecated implementation code, clean up aspects of the compilation process, fix compiler bugs and provide a structured framework into which types can fit.
Like previous versions of the compiler, the results will be completely portable.
This page will be updated with news and developments in relation to this new compiler implementation.
c0eb3a0b4e46394b36d70cbcd32af3b802beb6a3
General Additions
0
155
851
850
2013-03-24T13:31:20Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Accepted Additions ==
# [[Extendable Types]] - 0%
# Structure IO types - 0%
# Additional distribution types - 30%
# Group keyword - 100%
== Wish List ==
Please add here any features you would like to see in the upcoming development of Mesham.
97a88d2fe5e38eab0a9c2fbf41c903290196bb3a
Extendable Types
0
154
846
845
2013-03-24T13:33:34Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
A major idea for extension is to allow the programmer to create their own language types. In the current version of the language the programmer can only create new types at the compiler level; this is not a major issue at the moment due to the generality of the type library, however it does limit the language somewhat. Whilst it is relatively simple to create new types in this way, one cannot expect the programmer to have to modify the compiler in order to support the codes they wish to write. There are a number of issues to consider in relation to this aim:
* How to implement this efficiently?
* How to maximise static analysis and optimisation?
* How to minimise memory footprint?
* The ideal way of structuring the programming interface?
----
We have currently adopted a middle ground within the [[Oubliette]] compiler line, in as much as additional types may be provided as third party plugins which the compiler will recognise and allow the programmer to use freely. There is optional support for these third party types to provide additional runtime library services too. Whilst this is a reasonable interim step, the end goal is still to allow programmers to specify types within their own Mesham source code.
8d24e837d44b9a608f757297c08168a48b7040b7
Oubliette
0
176
949
948
2013-04-14T17:49:36Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended in the future to support extra type libraries via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded in the compiler, Oubliette just considers these to be normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
f6e91a42933eeb35dd5a626d0564cb8b6b7d7442
950
949
2013-04-15T11:07:18Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended in the future to support extra libraries via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette treats the standard function library as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
d3241f8e2c1a6e3918ea9300b81b139e15e11890
951
950
2013-04-15T11:07:33Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended in the future to support extra libraries via dynamic libraries. Unlike [[Arjuna]], which had the standard function library hard coded into the compiler, Oubliette treats the standard function library as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
944bdd49092c3d11782d0e81200d03f3e1e00a4b
Findrchar
0
242
1337
2013-04-15T10:21:09Z
Polas
1
Created page with '== Overview == This findrchar(s, c) function will return the index of the last occurrence of character ''c'' in string ''s''. * '''Pass:''' A [[String]] and [[Char]] * '''Retur…'
wikitext
text/x-wiki
== Overview ==
The findrchar(s, c) function returns the index of the last occurrence of character ''c'' in string ''s''.
* '''Pass:''' A [[String]] and [[Char]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the character does not exist within the string
== Example ==
#include <string>
var a:="hello";
var c:=findrchar(a,'l');
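For comparison, the behaviour can be sketched in Python (a hypothetical analogue; ''findrchar'' itself is a Mesham function, and the ValueError here merely stands in for Mesham's error string mechanism):

```python
def findrchar(s, c):
    # Index of the last occurrence of character c in string s;
    # raises ValueError("notfound") when c is absent, mirroring
    # the 'notfound' error string thrown by the Mesham function.
    idx = s.rfind(c)
    if idx < 0:
        raise ValueError("notfound")
    return idx

print(findrchar("hello", "l"))  # prints 3, as in the example above
```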
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
9e1c5e5558d051332e51e4bd96d9327239180dfc
Findstr
0
243
1340
2013-04-15T10:22:29Z
Polas
1
Created page with '== Overview == This findstr(s, s2) function will return the index of the first occurrence of search string ''s2'' in text string ''s''. * '''Pass:''' Two [[String|Strings]] * '…'
wikitext
text/x-wiki
== Overview ==
The findstr(s, s2) function returns the index of the first occurrence of search string ''s2'' in text string ''s''.
* '''Pass:''' Two [[String|Strings]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the search string does not occur within the text string
== Example ==
#include <string>
var a:="hello";
var c:=findstr(a,"el");
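The same behaviour can be sketched in Python (a hypothetical analogue, not Mesham itself; ValueError stands in for the error string mechanism):

```python
def findstr(s, s2):
    # Index of the first occurrence of search string s2 in text
    # string s; raises ValueError("notfound") when s2 is absent,
    # mirroring Mesham's 'notfound' error string.
    idx = s.find(s2)
    if idx < 0:
        raise ValueError("notfound")
    return idx

print(findstr("hello", "el"))  # prints 1, as in the example above
```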
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
0343054cd4743f23c5b10b39b690414d411aab0f
Trim
0
244
1343
2013-04-15T10:24:06Z
Polas
1
Created page with '== Overview == This trim(s) function will return a new string where the leading and trailing whitespace of string ''s'' has been removed. * '''Pass:''' A [[String]] * '''Return…'
wikitext
text/x-wiki
== Overview ==
The trim(s) function returns a new string in which the leading and trailing whitespace of string ''s'' has been removed.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
#include <io>
function void main() {
var m:=" hello world ";
print(m+"-\n"+trim(m)+"-\n");
};
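The behaviour corresponds to the following Python sketch (a hypothetical analogue, not the Mesham implementation):

```python
def trim(s):
    # Remove leading and trailing whitespace only; interior
    # whitespace is preserved, as with Mesham's trim.
    return s.strip()

m = " hello world "
print(trim(m))  # prints "hello world" with the outer spaces removed
```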
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
227970822be9bf5e65af0f815afb030822c05422
Operators
0
43
243
242
2013-04-15T11:04:06Z
Polas
1
wikitext
text/x-wiki
== Operators ==
#+ Addition
#- Subtraction
#<nowiki>*</nowiki> Multiplication
#/ Division
#++ Pre or post fix addition
#-- Pre or post fix subtraction
#<< Bit shift to left
#>> Bit shift to right
#== Test for equality
#!= Test for inverse equality
#! Logical negation
#( ) Function call or expression parentheses
#[ ] Array element access
#. Member access
#< Test lvalue is smaller than rvalue
#> Test lvalue is greater than rvalue
#<= Test lvalue is smaller or equal to rvalue
#>= Test lvalue is greater or equal to rvalue
#?: Inline if operator
#|| Logical OR
#&& Logical AND
[[Category:Core Mesham]]
3651f61b638d87890d7de0ccecf29752492650fa
Group
0
181
1001
1000
2013-04-15T14:05:54Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of arguments, involves all processes, each of which will either ignore or execute the block.
''Note:'' The texas range ''...'' is supported, although it may only appear between two values (it specifies the range between them) and the preceding value must be smaller than or equal to the following one.
== Example ==
#include <io>
function void main() {
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
group 1,...,3,5,...,8 {
print("Hello world from pid 1, 2, 3, 5, 6, 7 or 8\n");
};
};
The code fragment will involve 9 processes (0 to 8 inclusive). Only processes zero and three will display the first message, whilst the second message is displayed by processes 1, 2, 3, 5, 6, 7 and 8, as described by the texas ranges.
''Since: Version 1.0''
[[Category:Parallel]]
19ed4c20b7b1ef5c27a6f755a1d1087cfa9143fa
1002
1001
2013-04-15T14:06:02Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks, either values or variables known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of arguments, involves all processes, each of which will either ignore or execute the block.
''Note:'' The texas range ''...'' is supported, although it may only appear between two values (it specifies the range between them) and the preceding value must be smaller than or equal to the following one.
== Example ==
#include <io>
function void main() {
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
group 1,...,3,5,...,8 {
print("Hello world from pid 1, 2, 3, 5, 6, 7 or 8\n");
};
};
The code fragment will involve 9 processes (0 to 8 inclusive). Only processes zero and three will display the first message, whilst the second message is displayed by processes 1, 2, 3, 5, 6, 7 and 8, as described by the texas ranges.
''Since: Version 1.0''
[[Category:Parallel]]
49e35cb75ec2b42035d8438de8c1da6270e28c45
1003
1002
2013-04-15T14:06:35Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks; values, variables or texas range (with limits) known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' This is a blocking construct and, regardless of arguments, involves all processes, each of which will either ignore or execute the block.
''Note:'' The texas range ''...'' is supported, although it may only appear between two values (it specifies the range between them) and the preceding value must be smaller than or equal to the following one.
== Example ==
#include <io>
function void main() {
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
group 1,...,3,5,...,8 {
print("Hello world from pid 1, 2, 3, 5, 6, 7 or 8\n");
};
};
The code fragment will involve 9 processes (0 to 8 inclusive). Only processes zero and three will display the first message, whilst the second message is displayed by processes 1, 2, 3, 5, 6, 7 and 8, as described by the texas ranges.
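The texas range expansion used in the example can be sketched in Python (a toy model, not the compiler's implementation), using the literal ''...'' (Ellipsis) to stand in for Mesham's range marker:

```python
def expand_ranks(spec):
    # Expand a group rank specification; Ellipsis between two values
    # stands for Mesham's '...' texas range, so the preceding value
    # must be smaller than or equal to the following one.
    ranks, i = [], 0
    while i < len(spec):
        if spec[i] is Ellipsis:
            lo, hi = spec[i - 1], spec[i + 1]  # bounding values
            if lo > hi:
                raise ValueError("range start must not exceed end")
            ranks.extend(range(lo + 1, hi))  # bounds appear as list items
        else:
            ranks.append(spec[i])
        i += 1
    return ranks

# group 1,...,3,5,...,8 from the example above
print(expand_ranks([1, ..., 3, 5, ..., 8]))  # prints [1, 2, 3, 5, 6, 7, 8]
```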
''Since: Version 1.0''
[[Category:Parallel]]
6ecda0f8a7d161ba0a398a032d4765937f8d6c5d
Par
0
39
218
217
2013-04-15T14:07:10Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop, each iteration will execute concurrently on different processes. This allows the programmer to write code MPMD style, with the limitation that bounds ''a'' and ''b'' must be known during compilation. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.<br>
''Note:'' This is a blocking construct and, regardless of arguments, involves all processes, each of which will either ignore or execute the block.
== Example ==
#include <io>
function void main() {
var p;
par p from 0 to 9 {
print("Hello world\n");
};
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
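The semantics can be sketched sequentially in Python (a toy simulation; the real construct runs each iteration concurrently on a separate process):

```python
def par(a, b, body):
    # Simulates 'par p from a to b': the real construct executes each
    # iteration concurrently on its own process (b - a + 1 processes in
    # total, with no guarantee about which ranks are chosen); here we
    # simply run the body once per iteration value.
    for p in range(a, b + 1):
        body(p)
    return b - a + 1  # number of processes the construct involves

n = par(0, 9, lambda p: print("Hello world"))  # prints 10 messages
```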
''Since: Version 0.41b''
[[Category:Parallel]]
8dd2d27f351e97711526c257bc34f71000048f27
Proc
0
40
228
227
2013-04-15T14:07:40Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and allocated as ''multiple'' will in fact, by inference, be allocated to the group containing the single process whose rank is that of the proc block.<br>
''Note:'' This is a blocking construct and, regardless of arguments, involves all processes, each of which will either ignore or execute the block.
== Example ==
#include <io>
function void main() {
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
''Since: Version 0.41b''
[[Category:Parallel]]
a67e0c1571b1bda9b3bcc957872a9bc01ba45b94
Assignment
0
26
140
139
2013-04-15T14:08:20Z
Polas
1
wikitext
text/x-wiki
==Syntax==
To assign a value to a variable the programmer uses variable assignment.
[lvalue]:=[rvalue];
Where ''lvalue'' is a memory reference and ''rvalue'' a memory reference or expression
== Semantics==
Will assign the value of ''rvalue'' to ''lvalue''.
== Examples==
function void main() {
var i:=4;
var j:=i;
};
In this example the variable ''i'' will be declared and set to the value 4, and the variable ''j'' also declared and set to the value of ''i'' (4). Via type inference the types of both variables will be ''Int''.
''Since: Version 0.41b''
[[Category:sequential]]
93d7df635751b7943577852f9c4cdaf68b8a2205
Break
0
29
157
156
2013-04-15T14:08:37Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
break;
== Semantics ==
Will break out of the current enclosing loop body.
== Example ==
function void main() {
while (true) { break; };
};
Only one iteration of the loop will begin, at which point the ''break'' exits the loop body.
''Since: Version 0.41b''
[[Category:sequential]]
408e81bc84db59b6551ab1ff27267244cacc1ee2
If
0
32
173
172
2013-04-15T14:08:55Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
if (condition)<br>
{<br>
then body<br>
} else {<br>
else body<br>
};<br>
== Semantics ==
Will evaluate the condition and, if true, execute the code in the ''then body''. Optionally, if the condition is false, the code in the ''else body'' will be executed if this has been supplied by the programmer.
== Example ==
#include <io>
function void main() {
var a:=1;
var b:=2;
if (a==b) {
print("Equal");
};
};
In this code example two variables ''a'' and ''b'' are tested for equality. If equal then the message will be displayed. As no else section has been specified then no specific behaviour will be adopted if they are unequal.
''Since: Version 0.41b''
[[Category:sequential]]
bc1ec14c9916f451533963b4892460eaa5bd552e
Currenttype
0
99
555
554
2013-04-15T14:09:12Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
currenttype varname;
== Semantics ==
Will return the current type of the variable.<br><br>
''Note:'' If a variable is used within a type context then this is assumed to be shorthand for the current type of that variable<br>
''Note:'' This is a static construct and hence only available during compilation. It must be statically deducible and not used in a manner that is dynamic.
== Example ==
function void main() {
var i: Int;
var q:currenttype i;
};
Will declare ''q'' to be an integer, the same type as ''i''.
''Since: Version 0.5''
[[Category:Sequential]]
[[Category:Types]]
217b7e0a9ebf06a97b6b4383d196959d015c0cf6
Declaration
0
24
132
131
2013-04-15T14:09:41Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
All variables must be declared before they are used. In Mesham one may declare a variable via its value or explicit type.
var name;<br>
var name:=[Value];<br>
var name:[Type];<br>
Where ''name'' is the name of the variable being declared.
== Semantics ==
The environment will map the identifier to storage location and that variable is now usable. In the case of a value being specified then the compiler will infer the type via type inference either here or when the first assignment takes place.<br><br>
''Note:'' It is not possible to declare a variable with the value ''null'' as this is a special ''no value'' placeholder and as such has no type.
== Examples ==
function void main() {
var a;
var b:=99;
a:="hello";
};
In the code example above, the variable ''a'' is declared; without any further information its type is inferred from its first use (to hold a String). Variable ''b'' is declared with the value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes.
function void main() {
var t:Char;
var z:Char :: allocated[single[on[2]]];
};
Variable ''t'' is declared to be a character; without further type information it is assumed to be allocated on all processes (by default the type Char is allocated to all processes). Lastly, the variable ''z'' is declared to be of type character, but is allocated only on a single process (process 2).
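The declaration forms above can be modelled in Python (a toy sketch of the inference rules, not the Mesham compiler; the class and mapping are illustrative):

```python
class Variable:
    # Models Mesham's three declaration forms: bare ('var a;'),
    # by value ('var b := 99;', type inferred immediately), or the
    # bare form whose type is inferred at the first assignment.
    TYPE_NAMES = {int: "Int", str: "String"}

    def __init__(self, value=None, mesham_type=None):
        if mesham_type is None and value is not None:
            mesham_type = self.TYPE_NAMES[type(value)]
        self.value = value
        self.type = mesham_type  # None until inference runs

    def assign(self, value):
        if self.type is None:  # first use fixes the type
            self.type = self.TYPE_NAMES[type(value)]
        self.value = value

a = Variable()          # var a;
b = Variable(value=99)  # var b := 99;   (inferred Int)
a.assign("hello")       # a := "hello";  (inferred String)
```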
''Since: Version 0.41b''
[[Category:sequential]]
bdb646e3f7d4fe641c6e25916463c9fc4a39c32e
Declaredtype
0
100
561
560
2013-04-15T14:09:59Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
declaredtype name
Where ''name'' is a variable name
== Semantics ==
Will return the declared type of the variable.<br><br>
''Note:'' This is a static construct whose lifetime is limited to compilation.
== Example ==
function void main() {
var i:Int;
i:i::const[];
i:declaredtype i;
};
This code example will firstly type ''i'' to be an [[Int]]. On line 2, the type of ''i'' is combined with the type [[const]] (enforcing read only access to the variable's data). On line 3, the programmer reverts the variable back to its declared type (i.e. so one can write to the data again).
''Since: Version 0.5''
[[Category:Sequential]]
[[Category:Types]]
d075683e34b2162a57ddbfff3aee30f3472f406c
For
0
27
147
146
2013-04-15T14:10:17Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
for i from a to b <br>
{<br>
forbody<br>
};<br>
== Semantics ==
The for loop can be thought of as syntactic sugar for a while loop, incrementing the variable after each pass; it will loop from ''a'' to ''b'' inclusive.
== Example ==
#include <io>
#include <string>
function void main() {
var i;
for i from 0 to 9 {
print(itostring(i)+"\n");
};
};
This code example will loop from 0 to 9 (10 iterations) and display the value of ''i'' on each pass.
''Since: Version 0.41b''
[[Category:sequential]]
512654e7fa671e112340ae465d44e201733663b3
Sequential Composition
0
34
180
179
2013-04-15T14:10:47Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
body ; body
== Semantics ==
Will execute the code before the sequential composition, '';'', and then (if this terminates) will execute the code after the sequential composition.<br><br>
''Note:'' Unlike many imperative languages, all blocks must be terminated by a form of composition (sequential or parallel.)
== Examples ==
function void main() {
var a:=12 ; a:=99
};
In the above example variable ''a'' is declared to be equal to 12, after this the variable is then modified to hold the value of 99.
function void main() {
function1() ; function2()
};
In the second example ''function1'' will execute and then after (if it terminates) the function ''function2'' will be called.
''Since: Version 0.41b''
[[category:sequential]]
f037be84f6a43c186db4b2777331bc1b275856e0
Throw
0
31
168
167
2013-04-15T14:11:13Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
throw errorstring;
== Semantics ==
Will throw the error string, and either cause termination of the program or, if caught by a try catch block, will be dealt with.
== Example ==
#include <io>
function void main() {
try {
throw "an error"
} catch "an error" {
print("Error occurred!\n");
};
};
In this example, a programmer defined error ''an error'' is thrown and caught.
''Since: Version 0.5''
[[Category:sequential]]
7d9f05f570df25685680b1deba0b779c485cb5a2
Try
0
30
162
161
2013-04-15T14:11:43Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
try<br>
{<br>
try body<br>
} catch (error string) { <br>
error handing code<br>
}<br>
== Semantics ==
Will execute the code in the try body and handle any errors. This is very important in parallel computing as it allows the programmer to easily deal with any communication errors that may occur. Exception handling is dynamic in Mesham and the last appropriate catch block will be entered into depending on program flow.
== Error Strings ==
There are a number of error strings built into Mesham; additional ones can be specified by the programmer.
*Array Bounds - Accessing an array outside its bounds
*Divide by zero - Divide by zero error
*Memory Out - Memory allocation failure
*root - Illegal root process in communication
*rank - Illegal rank in communication
*buffer - Illegal buffer in communication
*count - Count wrong in communication
*type - Communication type error
*comm - Communication communicator error
*truncate - Truncation error in communication
*Group - Illegal group in communication
*op - Illegal operation for communication
*arg - Arguments used for communication incorrect
*oscli - Error returned by operating system when performing a system call
== Example ==
#include <io>
#include <string>
function void main() {
try {
var a:array[Int,10];
print(itostring(a[12]));
} catch ("Array Bounds") {
print("No Such Index\n");
};
};
In this example the programmer is trying to access element 12 of array ''a''. If this does not exist, then instead of that element being displayed an error message is put on the screen.
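The control flow of this example can be mirrored in Python (a hypothetical harness; ''MeshamError'', ''access'' and ''demo'' are illustrative names, not part of Mesham or its runtime):

```python
class MeshamError(Exception):
    # Mesham errors are plain strings; a catch block fires when its
    # string matches the thrown one.
    def __init__(self, error_string):
        super().__init__(error_string)
        self.error_string = error_string

def access(a, i):
    # Raise the built-in 'Array Bounds' error string on a bad index.
    if not 0 <= i < len(a):
        raise MeshamError("Array Bounds")
    return a[i]

def demo():
    t = [0] * 10
    try:
        return str(access(t, 12))   # element 12 does not exist
    except MeshamError as e:
        if e.error_string == "Array Bounds":
            return "No Such Index"  # handled, as in the catch block
        raise                       # unmatched errors propagate

print(demo())  # prints "No Such Index"
```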
''Since: Version 0.5''
[[Category:sequential]]
dc873c1361d5c5abb2e9527611677cbe186602a4
While
0
28
152
151
2013-04-15T14:12:04Z
Polas
1
wikitext
text/x-wiki
==Syntax==
while (condition) whilebody;
==Semantics==
Will loop whilst the condition holds.
== Examples ==
function void main() {
var a:=10;
while (a > 0) {
a--;
};
};
Will loop, each time decreasing the value of variable ''a'' by 1, until the value reaches 0.
''Since: Version 0.41b''
[[Category:Sequential]]
b94b3ba77562d71ebe482e5599f418ac248b9bbe
Category:Types
14
98
550
549
2013-04-15T14:13:01Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below. Where ''elementtype'' is defined in the type library, ''varname'' represents the current type of a variable and ''type :: type'' represents type combination to coerce into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
All element types start with a capitalised first letter and there must be at least one element type per type chain. Compound types start with a lower case letter and fall into a number of different subcategories of type:
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
Types may be referred to with or without arguments; it is therefore optional to write square brackets, ''[]'', after a type, with or without arguments inside.
== Declarations ==
=== Syntax ===
var name:type;
Where ''type'', as explained, is an ''elementtype'', a ''compoundtype'', variable name or ''type :: type''. The operator '':'' sets the type and ''::'' is type combination (coercion).
=== Semantics ===
This will declare a variable to be a specific type. Type combination is subject to a number of semantic rules. If no type information is given, then the type will be found via inference where possible.
=== Examples ===
function void main() {
var i:Int :: allocated[multiple[]];
};
Here the variable ''i'' is declared to be integer, allocated to all processes. There are three types included in this declaration, the element type [[Int]] and the compound types [[allocated]] and [[multiple]]. The type [[multiple]] is provided as an argument to the allocation type [[allocated]], which is then combined with the [[Int]] type.
function void main() {
var m:String;
};
In this example, variable ''m'' is declared to be of type [[String]]. For programmer convenience, by default, the language will automatically combine this with ''allocated[multiple[]]'' if such allocation information is missing.
== Statements ==
=== Syntax ===
name:type;
=== Semantics ===
Will modify the type of an already declared variable via the '':'' operator. Note, allocation information (via the ''allocation'' type) may not be changed. Type modification such as this binds to the current block, the type is reverted back to its previous value once that block has been left.
=== Examples ===
function void main() {
var i:Int :: allocated[multiple[]];
i:=23;
i:i :: const[];
};
Here the variable ''i'' is declared to be [[Int|integer]], [[allocated]] to all processes, and its value is set to 23. Later on in the code the type is modified to also make it [[const|constant]] (so from this point on the programmer may not change the variable's value). The third line, ''i:i :: const[];'', sets the type of ''i'' to be that of ''i'' combined with the [[const]] type.
'''Important Rule''' - Changing the type will not have any runtime code generation in itself, although the modified semantics will affect how the variable behaves from that point on.
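The block-scoped nature of type modification can be sketched in Python (a toy model of type chains, not the Mesham implementation; class and method names are illustrative):

```python
class TypeChain:
    # A type is a chain of type names, e.g. ["Int", "allocated[multiple[]]"].
    def __init__(self, *types):
        self.types = list(types)

    def combine(self, other):
        # The '::' operator: coerce into a longer chain (new supertype).
        return TypeChain(*self.types, other)

class Var:
    # ':' sets the current type; a modification made inside a block
    # reverts to the previous type once the block is left.
    def __init__(self, declared):
        self.declared = declared
        self._stack = [declared]  # innermost block's type is last

    @property
    def current(self):
        return self._stack[-1]

    def enter_block_with(self, new_type):
        self._stack.append(new_type)

    def leave_block(self):
        self._stack.pop()

i = Var(TypeChain("Int", "allocated[multiple[]]"))
i.enter_block_with(i.current.combine("const[]"))  # i : i :: const[];
assert "const[]" in i.current.types               # read only inside block
i.leave_block()
assert "const[]" not in i.current.types           # reverted on block exit
```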
== Expressions ==
=== Syntax ===
name::type
=== Semantics ===
When used as an expression, a variable's current type can be coerced with additional types just for that expression.
=== Example ===
function void main() {
var i:Int :: allocated[multiple[]];
(i :: channel[1,2]):=82;
i:=12;
};
This code will declare ''i'' to be an [[Int|integer]], [[allocated]] on all processes. On line 2 ''i :: channel[1,2]'' will combine the [[channel]] type (primitive communication) just for that assignment and then on line 3 the assignment happens as a normal integer. This is because on line 2 we have not set the type of ''i'', just modified it for that assignment.
[[Category:Core Mesham]]
a7b716165dac3a58ff84bee985e129d3307d24d6
Type Variables
0
101
566
565
2013-04-15T14:13:23Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
typevar name::=type;
name::=type;
Note how ''::='' is used rather than '':=''.
''typevar'' is the type equivalent of ''var''.
== Semantics ==
Type variables allow the programmer to assign types and type combinations to variables for use as normal program variables. These exist only statically (in compilation) and are not present in the runtime semantics.
== Example ==
function void main() {
typevar m::=Int :: allocated[multiple[]];
var f:m;
typevar q::=declaredtype f;
q::=m;
};
In the above code example, the type variable ''m'' has the type value ''Int :: allocated[multiple[]]'' assigned to it. On line 2, a new (program) variable is created using this new type variable. In line 3, the type variable ''q'' is declared and has the value of the declared type of program variable ''f''. Lastly in line 4, type variable ''q'' changes its value to become that of type variable ''m''. Although type variables can be thought of as the programmer creating new types, they can also be used like program variables in cases such as equality tests and assignment.
''Since: Version 0.5''
[[Category:Types]]
c18308550a08b9c0f21eccd7c4e097cba79cb6da
Allocated
0
62
334
333
2013-04-15T14:14:05Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allocated[type]
Where ''type'' is optional
== Semantics ==
This type sets the memory allocation of a variable, which may not be modified once set.
== Example ==
function void main() {
var i: Int :: allocated[];
};
In this example the variable ''i'' is an integer. Although the ''allocated'' type is provided, no additional information is given and as such Mesham allocates it to every process.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
b24412163f3b57beb406f819cf40c539bc63f5fa
Allreduce
0
82
453
452
2013-04-15T14:14:32Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
allreduce[operation]
== Semantics ==
Similar to the [[reduce]] type, but the reduction will be performed on each process and the result is also available to all.
== Example ==
function void main() {
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
(x::allreduce["min"]):=p;
};
};
In this case all processes will perform the reduction on ''p'' and all processes will have the minimum value of ''p'' placed into their copy of ''x''.
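The outcome can be sketched in Python (a toy simulation of the semantics, not real message passing; the function name is illustrative):

```python
def allreduce(values, op):
    # values[p] is process p's contribution; every process performs
    # the reduction and receives the result in its own copy of x.
    result = op(values)
    return [result] * len(values)  # one copy of x per process

# par p from 0 to 3, then (x::allreduce["min"]) := p
x_per_process = allreduce([0, 1, 2, 3], min)
print(x_per_process)  # prints [0, 0, 0, 0]: every copy of x holds the minimum
```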
''Since: Version 0.41b''
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
f561cbfab20c8d3e1ea1f794556cb53f7ab1cbeb
Alltoall
0
81
446
445
2013-04-15T14:15:41Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
alltoall[elementsoneach]
== Semantics ==
Will cause each process to send some elements (the number being equal to ''elementsoneach'') to every other process in the group.
== Example ==
function void main() {
var x:array[Int,12]::allocated[multiple[]];
var r:array[Int,3]::allocated[multiple[]];
var p;
par p from 0 to 3 {
(x::alltoall[3]):=r;
};
};
In this example each process sends every other process three elements (the elements in its ''r''.) Therefore each process ends up with twelve elements in ''x'', the location of each based on the source process's PID.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
715787adb1d21ed672dc76d5a4e824861dc7cc3c
Array
0
71
388
387
2013-04-15T14:16:05Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element or record type, followed by the dimensions. The programmer can provide any number of dimensions to create an n-dimensional array. The default is row major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer uses the traditional ''name[index]'' syntax.<br><br>
''Note:'' If the dimensions are omitted then it is assumed to be a one dimensional array of infinite size without any explicit memory allocation (e.g. data passed into a function.) Be aware that without any size information it is not possible to bounds check indexes.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[heap]]
* [[onesided]]
== Communication ==
When an array variable is assigned to another, depending on where each variable is allocated, there may be communication to achieve this assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with element types, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| Communication to process i
|-
| multiple[]
| single[on[i]]
| broadcast from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
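As a concrete illustration of the table, the following sketch (hedged; the variable names are illustrative and not from the original documentation) assigns a ''single''-allocated array to a ''multiple''-allocated one, which by the third rule results in a broadcast from the owning process:

```
function void main() {
   var src:array[Int,4]::allocated[single[on[1]]];
   var dst:array[Int,4]::allocated[multiple[]];
   dst:=src;
};
```

Here the assignment ''dst:=src'' causes process 1 to broadcast the contents of ''src'' into every process's copy of ''dst''.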
== Example ==
#include <io>
#include <string>
function void main() {
var a:array[String,2];
a[0]:="Hello";
a[1]:="World";
print(a[0]+" "+a[1]+"\n");
};
This example declares variable ''a'' to be an array of 2 Strings. The first location in the array is then set to ''Hello'' and the second location to ''World''. Lastly the code displays both these array string locations on stdio, followed by a newline.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
c39505c6d86fe236019000e21a1560ae1787be3f
Async
0
83
459
458
2013-04-15T14:16:59Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
async[ ]
== Semantics ==
This type specifies that the communication to be carried out should be done asynchronously. Asynchronous communication is often very useful and, if used correctly, can increase the efficiency of some applications (although care must be taken.) There are a number of different ways that the results of asynchronous communication can be accepted. When the asynchronous operation is honoured the data is placed into the variable; however, exactly when the operation will be honoured is non-deterministic and care must be taken if using dirty values.
The [[sync]] keyword allows the programmer to synchronise either all asynchronous communication or that of a specific variable. The programmer must ensure that all asynchronous communications have been honoured before the process exits, otherwise the behaviour is undefined.
== Examples ==
function void main() {
var a:Int::allocated[multiple[]] :: channel[0,1] :: async[];
var p;
par p from 0 to 2 {
a:=89;
var q:=20;
q:=a;
sync q;
};
};
In this example, ''a'' is declared to be an integer, allocated to all processes, and to act as an asynchronous channel between processes 0 and 1. In the par loop, the assignment ''a:=89'' is applicable on process 0 only, resulting in an asynchronous send. Each process executes the assignment and declaration ''var q:=20'' but only process 1 will execute the last assignment ''q:=a'', resulting in an asynchronous receive. Each process then synchronises all the communications relating to variable ''q''.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: async[];
var c:Int::allocated[single[on[3]]] :: async[];
a:=b;
c:=a;
b:=c;
sync;
};
This example demonstrates the use of the ''async'' type in terms of default shared variable style communication. In the assignment ''a:=b'', processor 2 will issue an asynchronous send and processor 1 will issue a synchronous (standard) receive. In the second assignment, ''c:=a'', processor 3 will issue an asynchronous receive and processor 1 a synchronous send. In the last assignment, ''b:=c'', both processors (3 and 2) will issue asynchronous communication calls (send and receive respectively.) The last line of the program forces each process to wait for and complete all asynchronous communications.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
07d00f232b51e34fd49c4ae7b036005a83780309
Blocking
0
84
465
464
2013-04-15T14:17:19Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
blocking[ ]
== Semantics ==
Will force P2P communication to be blocking, which is the default setting.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: blocking[];
a:=b;
};
The P2P communication (send on process 2 and receive on process 1) resulting from the assignment ''a:=b'' will force program flow to wait until it has completed. The ''blocking'' type has been omitted from the type of variable ''a'', but is applied by default.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
1a916b2a9e2c79154094eb7f50e9f9b5cc5d2676
Bool
0
49
278
277
2013-04-15T14:17:37Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Bool
== Semantics ==
A true or false value
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Bool;
var x:=true;
};
In this example variable ''i'' is explicitly declared to be of type ''Bool''. Variable ''x'' is declared with the value ''true'', which via type inference results in its type also becoming ''Bool''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
61cd134a6211d42a250c7a78545120a531d7f9c5
Broadcast
0
78
431
430
2013-04-15T14:18:02Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
broadcast[root]
== Semantics ==
This type will broadcast a variable amongst the processes, with the root (source) being PID=root. The variable concerned must be allocated either to all processes or to a group of processes (in the latter case communication will be limited to that group.)
== Example ==
function void main() {
var a:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
(a::broadcast[2]):=23;
};
};
In this example process 2 (the root) will broadcast the value 23 amongst the processes, each process receiving this value and placing it into their copy of ''a''.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
03ad9faa79774e87bcc4735feb12340962787ef9
Buffered
0
87
484
483
2013-04-15T14:18:21Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
buffered[buffersize]
== Semantics ==
This type will ensure that a P2P send reaches the finish state (i.e. completes) once the message has been copied into a buffer of size ''buffersize'' bytes. At some later point the message will be sent to the target process. If ''buffersize'' is not provided then a default is used. This type associates with the [[sync]] keyword, which will wait until the message has been copied out of the buffer.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: buffered[500];
var c:Int::allocated[single[on[2]]] :: buffered[500] :: nonblocking[];
a:=b;
a:=c;
};
In the P2P communication resulting from the assignment ''a:=b'', process 2 will issue a (blocking) buffered send (buffer size 500 bytes), which will complete once the message has been copied into this buffer. In the assignment ''a:=c'', process 2 will issue another buffered send, this time nonblocking, where program flow continues between the start and finish states of communication. The finish state is reached once the value of variable ''c'' has been copied into a buffer held on process 2.
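Because ''buffered'' associates with the [[sync]] keyword, a hedged variation on the example above (a sketch, not from the original documentation) can wait until the message has left the buffer:

```
function void main() {
   var a:Int::allocated[single[on[1]]];
   var b:Int::allocated[single[on[2]]] :: buffered[500];
   a:=b;
   sync b;
};
```

The statement ''sync b'' causes process 2 to wait until the value of ''b'' has been copied out of its 500 byte buffer.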
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
50c6962feabfcd511f17efec01dec17a438123d3
Channel
0
74
407
406
2013-04-15T14:18:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
channel[a,b]
Where ''a'' and ''b'' are both distinct processes which the channel will connect.
== Semantics ==
The ''channel'' type will specify that a variable is a channel from process ''a'' (sender) to process ''b'' (receiver.) Normally this will result in synchronous communication, although if the ''async'' type is used then asynchronous communication is selected instead. Note that a channel is unidirectional, where process a sends and b receives, NOT the other way around.<br><br>
''Note:'' By default (no further type information) all channel communication is blocking using standard send.<br>
''Note:'' If no allocation information is specified with the channel type then the underlying variable will not be assigned any memory - it is instead an abstract connection in this case.
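The second note can be sketched as follows (a hedged illustration, not from the original documentation): since no [[allocated]] information accompanies the channel type, ''c'' is an abstract connection with no backing memory:

```
function void main() {
   var c:Int::channel[0,1];
   var p;
   par p from 0 to 1 {
      c:=55;
      var received:=c;
   };
};
```

The assignment ''c:=55'' applies on the sending process 0 only, and the read into ''received'' on the receiving process 1 only.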
== Example ==
function void main() {
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2 {
(x::channel[0,2]):=193;
var hello:=(x::channel[0,2]);
};
};
In this case, ''x'' is a channel between processes 0 and 2. In the par loop process 0 sends the value 193 to process 2. Then the variable ''hello'' is declared and process 2 will receive this value.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
017370ae8fb49bea2ebec6633a0c741397e8921f
Char
0
50
284
283
2013-04-15T14:18:58Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Char
== Semantics ==
An 8 bit ASCII character
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Char;
var r:='a';
};
In this example variable ''i'' is explicitly declared to be of type ''Char''. Variable ''r'' is declared and found, via type inference, to also be type ''Char''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
fed9f001ad7720d80d580b97ffdb7093490cce8b
Col
0
73
401
400
2013-04-15T14:19:14Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
col[ ]
== Semantics ==
In combination with an array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In column major allocation the first dimension is the least major and the last dimension the most major.
== Example ==
function void main() {
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
};
Here the array uses column major allocation, but the programmer has overridden this (just for the assignment) in line 3. If an array with one allocation is copied to an array with a different allocation then transposition will be performed automatically in order to preserve indexes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
5ceda5448223aaecc60dc57d8341983da56a52cb
Commgroup
0
64
346
345
2013-04-15T14:19:28Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
commgroup[process list]
== Semantics ==
Specified within the [[multiple]] type, this will limit memory allocation (and variable communication) to the processes within the list given in this type's arguments. This type will also ensure that the processes of the communication group exist.
== Example ==
function void main() {
var i:Int :: allocated[multiple[commgroup[1,3]]];
};
In this example there are a number of processes, but only 1 and 3 have variable ''i'' allocated to them. This type will also have ensured that processes zero and two exist in order for there to be a process three.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
ebd9d30c512175e6c85622020d3a0f0cfdd0beaa
Const
0
66
357
356
2013-04-15T14:19:55Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
const[ ]
== Semantics ==
Enforces the read only property of a variable.
== Example ==
function void main() {
var a:Int;
a:=34;
a:(a :: const[]);
a:=33;
};
The code in the above example will produce an error. Whilst the first assignment (''a:=34'') is legal, on the subsequent line the programmer has modified the type of ''a'' to be that of ''a'' combined with the type ''const''. The second assignment attempts to modify a now read only variable and will fail.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
02b303fc0decec05fb087ac6a22055e71f02c14c
Directref
0
70
377
376
2013-04-15T14:20:09Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
directref[ ]
== Semantics ==
This tells the compiler that the programmer might use this variable outside of the language (e.g. via embedded C code) and so not to perform certain optimisations which might prevent this.
== Example ==
function void main() {
var pid:Int :: allocated[multiple[]] :: directref[];
};
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
62f811435d57f522da752efa1e30827f4b9b8749
Double
0
48
273
272
2013-04-15T14:20:24Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Double
== Semantics ==
A double precision 64 bit floating point number. This is the type given to constant floating point numbers that appear in program code.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Double;
};
In this example variable ''i'' is explicitly declared to be of type ''Double''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
b7fe5a9eb26c4db5128d1512334b45663c564529
Evendist
0
95
527
526
2013-04-15T14:20:58Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
evendist[]
== Semantics ==
Will distribute data blocks evenly amongst the processes. If there are more blocks than processes then the blocks will wrap around; if there are too few blocks then not all processes will receive a block. The figure below illustrates even distribution of 10 blocks of data over 4 processes.
<center>[[Image:evendist.jpg|Even distribution of 10 blocks of data over 4 processors using type oriented programming]]</center>
== Example ==
function void main() {
var a:array[Int,16,16] :: allocated[row[] :: horizontal[4] :: single[evendist[]]];
var b:array[Int,16,16] :: allocated[row[] :: vertical[4] :: single[evendist[]]];
var e:array[Int,16,16] :: allocated[row[] :: single[on[1]]];
var p;
par p from 0 to 3 {
var q:=b[p][2][3];
var r:=a[p][2][3];
var s:=(b :: horizontal[])[p][2][3];
};
a:=e;
};
In this example (which involves 4 processors) there are three [[array|arrays]] declared, ''a'', ''b'' and ''e''. Array ''a'' is [[horizontal|horizontally]] partitioned into 4 blocks, evenly distributed amongst the processors, whilst ''b'' is [[vertical|vertically]] partitioned into 4 blocks and also evenly distributed amongst the processors. Array ''e'' is located on processor 1 only. All arrays are allocated [[row]] major. In the [[par]] loop, variables ''q'', ''r'' and ''s'' are declared and assigned to be values at specific points in a processor's block. Because ''b'' is partitioned [[vertical|vertically]] and ''a'' [[horizontal|horizontally]], variable ''q'' is the value at ''b's'' block memory location 11, whilst ''r'' is the value at ''a's'' block memory location 35. On line 9, variable ''s'' is the value at ''b's'' block memory location 50 because, just for this expression, the programmer has used the [[horizontal]] type to take a horizontal view of the distributed array. It should be noted that in line 9 it is just the view of the data that is changed; the underlying data allocation is not modified.
In line 11 the assignment ''a:=e'' results in a scatter as per the definition of its declared type.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Distribution Types]]
a3d17fd7606dcd26e3fbe842d3e71a2dfa31e0f8
File
0
52
296
295
2013-04-15T14:21:17Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
File
== Semantics ==
A file handle which the programmer can use to reference open files on the file system
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:File;
};
In this example variable ''i'' is explicitly declared to be of type ''File''.
''Since: Version 0.41b''
== Communication ==
It is not currently possible to communicate file handles due to operating system constraints.
[[Category:Element Types]]
[[Category:Type Library]]
92b15263b845093ec2b1258c275a9fe25ea23606
Float
0
47
267
266
2013-04-15T14:21:33Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Float
== Semantics ==
A 32 bit floating point number
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Float;
};
In this example variable ''i'' is explicitly declared to be of type ''Float''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
a2465b2c1f8ed114a674a125799f7da2b547712a
Gather
0
79
436
435
2013-04-15T14:21:56Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
gather[elements,root]
== Semantics ==
Gather a number of elements (equal to ''elements'') from each process and send these to the root process.
== Example ==
function void main() {
var x:array[Int,12] :: allocated[single[on[2]]];
var r:array[Int,3] :: allocated[multiple[]];
var p;
par p from 0 to 3 {
(x::gather[3,2]):=r;
};
};
In this example, the variable ''x'' is allocated on the root process (2) only, whereas ''r'' is allocated on all processes. In the assignment, all three elements of ''r'' are gathered from each process, sent to the root process (2) and placed into variable ''x'' in the order defined by the source's PID.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
e4a03011d4a685bd754193f6ff3f264bdc0e5997
Heap
0
185
1025
1024
2013-04-15T14:22:16Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
heap[]
== Semantics ==
Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br>
''Note:'' All heap memory is garbage collected. The specifics of this depend on the runtime library; broadly, when memory goes out of scope it will be collected at some future point. Although not necessary, you can assign the ''null'' value to the variable, which will drop a reference to the memory.
''Note:'' This type, used for function parameters or return type instructs pass by reference
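The garbage collection note above can be sketched as follows (a hedged illustration, not from the original documentation):

```
function void main() {
   var i:Int :: allocated[heap];
   i:=12;
   i:=null;
};
```

Assigning ''null'' to ''i'' drops the reference to its heap memory, making that memory eligible for collection at some future point.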
== Example ==
function void main() {
var i:Int :: allocated[heap];
};
In this example variable ''i'' is declared as an integer, allocated to all processes (by default) and on the heap. Note how we have omitted the optional braces of the ''heap'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
75eba820c64997cc5b3af905d3cefc01f4ec6f13
Int
0
45
255
254
2013-04-15T14:22:42Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Int
== Semantics ==
A single whole 32 bit number. This is also the type of integer constants.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Int;
var b:=12;
};
In this example variable ''i'' is explicitly declared to be of type ''Int''. On line 2, variable ''b'' is declared and via type inference will also be of type ''Int''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
bdaff79c7868cffdc1ffc373426196718021a549
Long
0
53
301
300
2013-04-15T14:22:57Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Long
== Semantics ==
A long 64 bit number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Long;
};
In this example variable ''i'' is explicitly declared to be of type ''Long''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
63d47e595b62f0bad6e8c5cdff2e6e0c1f63073c
Multiple
0
63
339
338
2013-04-15T14:23:11Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
multiple[type]
Where ''type'' is optional
== Semantics ==
Included within [[allocated]], this will (with no arguments) set the specific variable to have memory allocated on all processes within the current scope.
== Example ==
function void main() {
var i: Int :: allocated[multiple[]];
};
In this example the variable ''i'' is an integer, allocated to all processes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
e8efda4977cc95d5579f8485ba6c9f501c5e3d53
Nonblocking
0
85
471
470
2013-04-15T14:23:29Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
nonblocking[ ]
== Semantics ==
This type will force P2P communication to be nonblocking. In this mode communication (send or receive) can be thought of as having two distinct states - start and finish. The nonblocking type starts communication and allows program execution to continue between these two states, whilst blocking (standard) mode requires that the finish state has been reached before continuing. The [[sync]] keyword can be used to force the program to wait until the finish state has been reached.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]] :: nonblocking[];
var b:Int::allocated[single[on[2]]];
a:=b;
sync a;
};
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking receive whilst process 2 will issue a blocking send. All nonblocking communication with respect to variable ''a'' is completed by the keyword ''sync a''.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
653752188a33b60292d65aa33576345130c98de8
Onesided
0
76
418
417
2013-04-15T14:23:47Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
onesided[a,b]
onesided[]
== Semantics ==
Identical to the [[Channel]] type, but will perform onesided communication rather than P2P. This form of communication is less efficient than P2P, but there are no issues such as deadlock to consider. This type is connected to the [[sync]] keyword, which allows the programmer to barrier synchronise to ensure up to date values. The current memory model is Concurrent Read Concurrent Write (CRCW.)<br><br>
''Note:'' This is the default communication behaviour in the absence of further type information.
== Example ==
function void main() {
var i:Int::onesided::allocated[single[on[2]]];
proc 0 {i:=34;};
sync i;
};
In the above code example variable ''i'' is declared to be an integer using onesided communication, allocated on process two only. In line two an assignment occurs on process zero, which will write the value from process zero into the memory held by process two. At line three barrier synchronisation will occur on variable ''i''; in this case this involves processes zero and two, ensuring that the value has been fully written and is available.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
7c0ff4ce4c8a57a8d60c76c1158b2439b77f5bcc
Ready
0
88
491
490
2013-04-15T14:24:07Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
ready[ ]
== Semantics ==
The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunction with the [[nonblocking]] type, communication start will wait until a matching receive is posted. This type acts as a form of handshaking and can improve performance in some uses.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: ready[];
var c:Int::allocated[single[on[2]]] :: ready[] :: nonblocking[];
a:=b;
a:=c;
};
The send of assignment ''a:=b'' will only begin once the receive from process 1 has been issued. With the statement ''a:=c'' the send, even though it is [[nonblocking]], will only start once a matching receive has been issued too.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
21bdd8ab0eb0a389b37a343c45f73493cbec3f78
Record
0
96
535
534
2013-04-15T14:24:32Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef structure in C. To access a member of a record use the dot operator ''.''
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[stack]]
* [[onesided]]
== Example ==
function void main() {
typevar complex ::= record["r",Float,"i",Float];
var a:array[complex, 10];
var number:complex;
var pixel : record["r",Int,"g",Int,"b",Int];
a[1].r:=8.6;
number.i:=3.22;
pixel.b:=128;
};
In the above example, ''complex'' is declared as a [[Type_Variables|type variable]] representing a complex number. This is then used as the type chain for ''a'', which is an [[array]], and for ''number''. Using records in this manner can be useful, although the alternative is to include the record directly in the type chain of a variable, as in the declaration of ''pixel''. Do not confuse ''complex'' (a type variable existing during compilation only) with ''pixel'' (a normal data variable which exists at runtime.) In the last three lines assignment occurs to the declared variables.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
efc39c9403ee2e1e18968e6cc3d099670c7d384d
Reduce
0
77
425
424
2013-04-15T14:25:00Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
reduce[root,operation]
== Semantics ==
All processes in the group will combine their values at the root process by applying the given operation to them.
== Example ==
function void main() {
var t:Int::allocated[multiple[]];
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::reduce[1,"max"]);
x:=p;
t:=x;
};
};
In this example, ''x'' is to be reduced, with the root as process 1 and the operation will be to find the maximum number. In the first assignment ''x:=p'' all processes will combine their values of ''p'' and the maximum will be placed into process 1's ''x''. In the second assignment ''t:=x'' processes will combine their values of ''x'' and the maximum will be placed into process 1's ''t''.
''Since: Version 0.41b''
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
760fffc606dd80b0b556dd9cef544a44eb693696
Referencerecord
0
97
542
541
2013-04-15T14:25:56Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The [[record]] type may NOT refer to itself (or other records) whereas reference records support this, allowing the programmer to create data structures such as linked lists and trees. There are some added complexities to reference records, such as communicating them (all links and linked nodes will be communicated with the record) and freeing the data (garbage collection.) This results in a slight performance hit and is the reason why the record concept has been split into two types.
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[heap]]
''Currently communication is not available for reference records, this will be fixed at some point in the future.''
== Example ==
#include <io>
#include <string>
typevar node;
node::=referencerecord["prev",node,"data",Int,"next",node];
function void main() {
var head:node;
head:=null;
var i;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=head;
if (head!=null) head.prev:=newnode;
head:=newnode;
};
while (head != null) {
print(itostring(head.data)+"\n");
head:=head.next;
};
};
In this code example a doubly linked list is created, and then its contents read node by node.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
93fccbcb8408dc735075a3cd715e43a3828471e3
Row
0
72
395
394
2013-04-15T14:26:16Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
row[ ]
== Semantics ==
In combination with an array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In row major allocation the first dimension is the most major and the last the most minor.
== Example ==
function void main() {
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
};
Here the array uses column major allocation, but the programmer has overridden this (just for the assignment) in line 3. If an array with one allocation is copied to an array with a different allocation then transposition will be performed automatically in order to preserve indexes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
1a1dd5c667218e633b48ebc4dd960d90c8a2363a
Scatter
0
80
441
440
2013-04-15T14:26:39Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
scatter[elements,root]
== Semantics ==
Will send a number of elements (equal to ''elements'') from the root process to each process in the group.
== Example ==
function void main() {
var x:array[Int,3]::allocated[multiple[]];
var r:array[Int,12]::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::scatter[3,1]);
x:=r;
};
};
In this example, the elements of array ''r'' on process 1 are scattered, three to each process, and placed in that process's copy of ''x''.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
44d165b64b97e8f9675dc560f2c6ff660a4623e7
Share
0
68
366
365
2013-04-15T14:26:57Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
share[name]
== Semantics ==
This type allows the programmer to have two variables sharing the same memory (the variable that the share type is applied to uses the memory of that specified as the argument to the type.) This is very useful in HPC applications as often processes are running at the limit of their resources. The type will share memory with that of the variable ''name'' in the above syntax. In order to keep this type safe, the sharee must be smaller than or of equal size to the memory chunk; this is error checked.
== Example ==
function void main() {
var a:Int::allocated[multiple[]];
var c:Int::allocated[multiple[] :: share[a]];
var e:array[Int,10]::allocated[single[on[1]]];
var u:array[Char,12]::allocated[single[on[1]] :: share[e]];
};
In the example above, the variables ''a'' and ''c'' will share the same memory, as will the variables ''e'' and ''u''. There is some potential concern that this might result in an error, as the size of array ''u'' is 12 while the size of array ''e'' is only 10. However, when the two arrays have different element types the sizes are checked dynamically in bytes: an Int is 32 bits and a Char only 8, so this sharing of data will work in this case.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
865ac55f449ec32694ba7760a025ce93f230e16d
Short
0
182
1009
1008
2013-04-15T14:27:17Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
Short
== Semantics ==
A single whole 16-bit number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Short;
};
In this example variable ''i'' is explicitly declared to be of type ''Short''.
''Since: Version 1.0''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
48db9041d021682ecc620a1978233cbb4c48060b
Single
0
65
352
351
2013-04-15T14:27:31Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
single[type]
single[on[process]]
where ''type'' is optional
== Semantics ==
Allocates a variable to a specific process. It is most commonly combined with the ''on'' type, which specifies the process to allocate to, although this is not required if the process can be inferred. Additionally, the programmer may place a distribution type within ''single'' when dealing with distributed arrays.
== Example ==
function void main() {
var i:Int :: allocated[single[on[1]]];
};
In this example variable ''i'' is declared as an integer and allocated on process 1.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
6e74dbec9bd0f6e55312f76ea5613a2cb312e5b4
Stack
0
184
1018
1017
2013-04-15T14:27:47Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
stack[]
== Semantics ==
Instructs the environment to bind the associated variable to stack frame memory, which exists for a specific function only whilst it is ''alive''. Once the corresponding function has returned, the memory is freed and hence the variable ceases to exist.<br><br>
''Note:'' This type, used for function parameters or the return type, instructs pass by value.
== Example ==
function void main() {
var i:Int :: allocated[stack];
};
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the stack frame of the current function. Note how we have omitted the optional braces to the ''stack'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
f6693fc301e6aa97a613855f215ad03695868192
Standard
0
86
477
476
2013-04-15T14:28:03Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
standard[ ]
== Semantics ==
This type will force P2P sends to follow the standard form of reaching the finish state either when the message has been delivered or when it has been copied into a buffer on the sender. This is the default applied if further type information is not present.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]] :: nonblocking[] :: standard[];
var b:Int::allocated[single[on[2]]] :: standard[];
a:=b;
};
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking standard receive whilst process 2 will issue a blocking standard send.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
594fcde910d32d7bd6e0003296ff56446dd17c9d
Static
0
186
1031
1030
2013-04-15T14:28:19Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
static[]
== Semantics ==
Instructs the environment to bind the associated variable to static memory. Because it is allocated into static memory, this is the same physical memory per function call and loop iteration (environment binding only occurs once).<br><br>
''Note:'' This type, used for function parameters or the return type, instructs pass by value.
== Example ==
function void main() {
var i:Int :: allocated[static];
};
In this example variable ''i'' is declared as an integer, allocated to all processes (by default) and placed in static memory. Note how we have omitted the optional braces to the ''static'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
73ceadc619419c5965d3c2c7e39c99da668c2558
String
0
51
290
289
2013-04-15T14:28:33Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
String
== Semantics ==
A string of characters. All strings are immutable; concatenating strings will in fact create a new string.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:String;
var p:="Hello World!";
};
In this example variable ''i'' is explicitly declared to be of type ''String''. Variable ''p'' is found, via type inference, also to be of type ''String''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
7ab2bc8ea1834a195f690040b72929215f16644e
Synchronous
0
89
497
496
2013-04-15T14:28:50Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
synchronous[]
== Semantics ==
By using this type, the send of P2P communication will only reach the finish state once the message has been received by the target process.
== Examples ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: synchronous[] :: blocking[];
var c:Int::allocated[single[on[2]]] :: synchronous[] :: nonblocking[];
a:=b;
a:=c;
};
The send of assignment ''a:=b'' (and program execution on process 2) will only complete once process 1 has received the value of ''b''. The send involved with the second assignment is synchronous and [[nonblocking]]: program execution can continue between the start and finish states, with the finish state only reached once process 1 has received the message (the value of ''c''). Incidentally, as already mentioned, the [[blocking]] type of variable ''b'' would have been chosen by default had it been omitted (as in previous examples).
var a:Int :: allocated[single[on[0]]];
var b:Int :: allocated[single[on[1]]];
a:=b;
a:=(b :: synchronous[]);
The code example above demonstrates the programmer's ability to change the communication send mode for a specific assignment. In the first assignment, process 1 issues a [[blocking]] [[standard]] send; in the second assignment the communication mode type ''synchronous'' is coerced with the type of ''b'' to provide a [[blocking]] synchronous send for that assignment only.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
2828c60e03ad41895edf8f33973bce097fd1e6f2
Acos
0
192
1056
1055
2013-04-15T14:29:46Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The acos(d) function will find the inverse cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse cosine of
* '''Returns:''' A [[Double]] representing the inverse cosine
== Example ==
#include <maths>
function void main() {
var d:=acos(0.9);
var y:=acos(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
1336caa03f2d3d19eee0c2d80702352b5ef43fbd
1057
1056
2013-04-15T14:32:47Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The acos(d) function will find the inverse cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse cosine of
* '''Returns:''' A [[Double]] representing the inverse cosine
== Example ==
#include <maths>
function void main() {
var d:=acos(0.9);
var y:=acos(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
e7ca8b4dffeb65f5987cb0d86289f816ad66ef5c
Asin
0
193
1062
1061
2013-04-15T14:30:01Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The asin(d) function will find the inverse sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse sine of
* '''Returns:''' A [[Double]] representing the inverse sine
== Example ==
#include <maths>
function void main() {
var d:=asin(0.5);
var y:=asin(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
15e5ae674252a098c93f781c18b5fe88af2d285d
1063
1062
2013-04-15T14:32:33Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The asin(d) function will find the inverse sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse sine of
* '''Returns:''' A [[Double]] representing the inverse sine
== Example ==
#include <maths>
function void main() {
var d:=asin(0.5);
var y:=asin(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
ca2ee53e5aac063485d2a3761ae262f6ce52f14b
Atan
0
194
1068
1067
2013-04-15T14:30:13Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The atan(d) function will find the inverse tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse tangent of
* '''Returns:''' A [[Double]] representing the inverse tangent
== Example ==
#include <maths>
function void main() {
var d:=atan(876.3);
var y:=atan(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
c6df3221595f19164dca3bc588213d4185c3df09
1069
1068
2013-04-15T14:32:26Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The atan(d) function will find the inverse tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse tangent of
* '''Returns:''' A [[Double]] representing the inverse tangent
== Example ==
#include <maths>
function void main() {
var d:=atan(876.3);
var y:=atan(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
110a6e4c40e637fb0021356dece79e5a2086df0f
Ceil
0
198
1088
1087
2013-04-15T14:30:28Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This ceil(d) function will find the smallest integer greater than or equal to ''d''.
* '''Pass:''' A [[Double]] to find the ceil of
* '''Returns:''' An [[Int]] representing the ceiling
== Example ==
#include <maths>
function void main() {
var a:=ceil(10.5);
var y:=ceil(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
0803986ed5fdbbfc057949d6fe48aec049dd47ab
1089
1088
2013-04-15T14:32:17Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This ceil(d) function will find the smallest integer greater than or equal to ''d''.
* '''Pass:''' A [[Double]] to find the ceil of
* '''Returns:''' An [[Int]] representing the ceiling
== Example ==
#include <maths>
function void main() {
var a:=ceil(10.5);
var y:=ceil(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
ca7f759657ced14b3d68ea3874f9fe15f55687ca
Charat
0
124
680
679
2013-04-15T14:30:43Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This charat(s,n) function will return the character at position ''n'' of the string ''s''.
* '''Pass:''' A [[String]] and [[Int]]
* '''Returns:''' A [[Char]]
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=charat(a,2);
var d:=charat("test",0);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
a178f28afbb99604441c5718781d3436749a056f
681
680
2013-04-15T14:32:09Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This charat(s,n) function will return the character at position ''n'' of the string ''s''.
* '''Pass:''' A [[String]] and [[Int]]
* '''Returns:''' A [[Char]]
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=charat(a,2);
var d:=charat("test",0);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
bd7a355f15778415e2fde11942f7a99ee90a8a5c
Close
0
201
1102
1101
2013-04-15T14:30:56Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The close(f) function will close the file represented by handle ''f''
* '''Pass:''' A [[File]] handle
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:=open("myfile.txt","r");
close(f);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
623fc6fda671ceb588272ede16f55d6b894cb6ee
1103
1102
2013-04-15T14:32:00Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The close(f) function will close the file represented by handle ''f''
* '''Pass:''' A [[File]] handle
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:=open("myfile.txt","r");
close(f);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
ecc7c40b6f4c9193d8dd13baf2b38663f6bd305d
Complex
0
200
1097
1096
2013-04-15T14:31:14Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The ''complex'' type is defined within the mathematical library to represent a complex number with real and imaginary components. It is built from a [[record]] type with both components as doubles.
== Example ==
#include <maths>
function void main() {
var a:complex;
a.i:=19.65;
a.r:=23.44;
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
9ecd2a45aaf003b9aa72b7f9df3235737db5f818
1098
1097
2013-04-15T14:31:52Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The ''complex'' type is defined within the mathematical library to represent a complex number with real and imaginary components. It is built from a [[record]] type with both components as doubles.
== Example ==
#include <maths>
function void main() {
var a:complex;
a.i:=19.65;
a.r:=23.44;
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
1d02817d5ee922340f5ebbed4d0796f7df3015a9
Cos
0
108
593
592
2013-04-15T14:31:28Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This cos(d) function will find the cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find cosine of
* '''Returns:''' A [[Double]] representing the cosine
== Example ==
#include <maths>
function void main() {
var a:=cos(10.4);
var y:=cos(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
2004d07102bd926cb9cc5206d040163454bf58e2
Cosh
0
195
1074
1073
2013-04-15T14:31:43Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The cosh(d) function will find the hyperbolic cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic cosine of
* '''Returns:''' A [[Double]] representing the hyperbolic cosine
== Example ==
#include <maths>
function void main() {
var d:=cosh(10.4);
var y:=cosh(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
285dfd293f100de431db1ccafc6a7a8a938b3b4c
Dtostring
0
206
1124
1123
2013-04-15T14:33:10Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The dtostring(d, a) function will convert the variable or value ''d'' into a string using the formatting supplied in ''a''.
* '''Pass:''' A [[Double]] and [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:=23.4352;
var c:=dtostring(a, "%.2f");
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
b5552df822b385b1c05b0ccee8c112db3f006998
Exp
0
239
1330
1329
2013-04-15T14:33:26Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This exp(x) function will return the exponential of ''x'' (e raised to the power of ''x'').
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the exponential
== Example ==
#include <maths>
function void main() {
var a:=exp(23.4);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
5b1c383b25ca0b99218b7ff4203776b37ebf14c5
Findchar
0
241
1335
1334
2013-04-15T14:33:40Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This findchar(s, c) function will return the index of the first occurrence of character ''c'' in string ''s''.
* '''Pass:''' A [[String]] and [[Char]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the character does not exist within the string
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=findchar(a,'l');
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
c27bb1368a0c7d0a9c08293b91676cc2ce9a1196
Findrchar
0
242
1338
1337
2013-04-15T14:33:55Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This findrchar(s, c) function will return the index of the last occurrence of character ''c'' in string ''s''.
* '''Pass:''' A [[String]] and [[Char]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the character does not exist within the string
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=findrchar(a,'l');
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
d57a4e8ea32daff716b515558dee3f6cbad338a7
Findstr
0
243
1341
1340
2013-04-15T14:34:10Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This findstr(s, s2) function will return the index of the first occurrence of search string ''s2'' in text string ''s''.
* '''Pass:''' Two [[String|Strings]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the search string does not exist within the string
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=findstr(a,"el");
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
8099c46bfc158c7e371111e9ba241e8125e6ab25
Floor
0
109
598
597
2013-04-15T14:34:25Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This floor(d) function will find the largest integer less than or equal to ''d''.
* '''Pass:''' A [[Double]] to find floor of
* '''Returns:''' An [[Int]] representing the floor
== Example ==
#include <maths>
function void main() {
var a:=floor(10.5);
var y:=floor(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
a1f40f5f8327abe46dfefea992816c1d2a3181cd
Getprime
0
110
603
602
2013-04-15T14:34:45Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This getprime(n) function will find the ''n''th prime number.
* '''Pass:''' An [[Int]]
* '''Returns:''' An [[Int]] representing the prime
== Example ==
#include <maths>
function void main() {
var a:=getprime(10);
var y:=getprime(a);
};
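A minimal Python sketch of the computation ''getprime'' performs is given below; it assumes 1-based indexing (so the first prime is 2), which is an assumption about the library, and uses simple trial division rather than whatever the runtime actually does.

```python
def getprime(n):
    """Return the n-th prime by trial division (1-based indexing assumed)."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # candidate is prime if no divisor in 2..sqrt(candidate) divides it
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate
```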
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
72563debc31be6a39bdb903f7c4a797d537529b6
Input
0
118
649
648
2013-04-15T14:35:01Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This input(i) function will prompt the user for input via stdin, the result being placed into ''i''
* '''Pass:''' A variable for the input to be written into, of type [[String]]
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:String;
input(f);
print("You wrote: "+f+"\n");
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
efadb447b7496688629c4a02ea7cc538c64e6296
Itostring
0
205
1120
1119
2013-04-15T14:35:14Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The itostring(n) function will convert the variable or value ''n'' into a string.
* '''Pass:''' An [[Int]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:=234;
var c:=itostring(a);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
a801bb61bb1b30a65cdb27eb72174c5316d9d306
Log
0
111
610
609
2013-04-15T14:35:27Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This log(d) function will find the natural logarithmic value of ''d''
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the logarithmic value
== Example ==
#include <maths>
function void main() {
var a:=log(10.54);
var y:=log(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
260900e77f6d0766001d0fccafbe7e21e636b685
Log10
0
199
1093
1092
2013-04-15T14:35:39Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This log10(d) function will find the base 10 logarithmic value of ''d''
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the base 10 logarithmic value
== Example ==
#include <maths>
function void main() {
var a:=log10(0.154);
var y:=log10(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
a3be85eeeb434e2934290a031224406429310522
Lowercase
0
125
687
686
2013-04-15T14:35:52Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This lowercase(s) function will return the lower case result of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:="HeLlO";
var c:=lowercase(a);
var d:=lowercase("TeST");
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
c46b6bd33d89eca359411a0b7cb1d3d89fb71fa5
Mod
0
112
615
614
2013-04-15T14:36:05Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This mod(n,x) function will divide ''n'' by ''x'' and return the remainder.
* '''Pass:''' Two integers
* '''Returns:''' An integer representing the remainder
== Example ==
#include <maths>
function void main() {
var a:=mod(7,2);
var y:=mod(a,a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
596158b28e9add95119049e4ee4a43f7810c9ad8
Open
0
202
1107
1106
2013-04-15T14:36:19Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This open(n,a) function will open the file of name ''n'' with mode of ''a''.
* '''Pass:''' The name of the file to open of type [[String]] and mode of type [[String]]
* '''Returns:''' A file handle of type [[File]]
== Example ==
#include <io>
function void main() {
var f:=open("myfile.txt","r");
close(f);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
37dcc748ba2a4854d15fc176a7249151b73b0592
Oscli
0
133
724
723
2013-04-15T14:36:42Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This oscli(a) function will pass the command line interface (e.g. Unix or MS DOS) command to the operating system for execution.
* '''Pass:''' A [[String]] representing the command
* '''Returns:''' Nothing
* '''Throws:''' The error string ''oscli'' if the operating system returns an error to this call
== Example ==
#include <io>
#include <system>
function void main() {
var a:String;
input(a);
try {
oscli(a);
} catch ("oscli") {
print("Error in executing command\n");
};
};
The above program is a simple interface, allowing the user to input a command which is then passed to the OS for execution. The ''oscli'' call is wrapped in a try-catch block, which will detect when the user has requested the run of an erroneous command; this explicit error handling is entirely optional.
''Since: Version 0.5''
[[Category:Function Library]]
[[Category:System Functions]]
157bc855222f3afa62b1ecad06f38a0aff6c40b0
PI
0
113
621
620
2013-04-15T14:36:55Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pi() function will return PI.
''Note: The number of significant figures of PI is implementation specific.''
* '''Pass:''' None
* '''Returns:''' A [[Double]] representing PI
== Example ==
#include <maths>
function void main() {
var a:=pi();
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
5380d2d50eccb8ee2c895d308484ad6efade625a
Pid
0
122
670
669
2013-04-15T14:37:11Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pid() function will return the current process's ID number.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the current process ID
== Example ==
#include <parallel>
function void main() {
var a:=pid();
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Parallel Functions]]
4df3b22f261b1137c0967d25404e15b0a280f0c7
Pow
0
114
627
626
2013-04-15T14:37:25Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This pow(n,x) function will return ''n'' to the power of ''x''.
* '''Pass:''' Two [[Int|Ints]]
* '''Returns:''' A [[Double]] representing ''n'' raised to the power of ''x''
== Example ==
#include <maths>
function void main() {
var a:=pow(2,8);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
13f5fa88a084da5eab6479c3725c8117c3857d6a
Print
0
119
654
653
2013-04-15T14:37:40Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This print(n) function will write a variable of value ''n'' to stdout.
* '''Pass:''' A [[String]] typed variable or value
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:="Hello";
print(f+" world\n");
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
c5d3ebfe96d7748fac20a332ed1cc95dba18bf95
Processes
0
123
675
674
2013-04-15T14:37:59Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This processes() function will return the number of processes.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the number of processes
== Example ==
#include <parallel>
function void main() {
var a:=processes();
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Parallel Functions]]
2ac7efcb08254df1e32445bbd0313562793d405e
Randomnumber
0
115
632
631
2013-04-15T14:38:13Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This randomnumber(n,x) function will return a random number between ''n'' and ''x''.
''Note: A whole number will be returned UNLESS you pass the bounds of 0,1, in which case a floating point number is returned.''
* '''Pass:''' Two [[Int|Ints]] defining the bounds of the random number
* '''Returns:''' A [[Double]] representing the random number
== Example ==
#include <maths>
function void main() {
var a:=randomnumber(10,20);
var b:=randomnumber(0,1);
};
In this case, ''a'' is a whole number between 10 and 20, whereas ''b'' is a decimal number.
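The special-cased bounds can be mimicked in Python. This is a sketch of the documented behaviour only, not the library's implementation; the function name simply mirrors the library call.

```python
import random

def randomnumber(n, x):
    """Whole number in [n, x], except bounds (0, 1) which yield a float."""
    if (n, x) == (0, 1):
        return random.random()          # floating point in [0, 1)
    return float(random.randint(n, x))  # whole number, returned as a Double

a = randomnumber(10, 20)  # whole number between 10 and 20
b = randomnumber(0, 1)    # decimal number
```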
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
1bb2cbf3fac50d477f062d74f1ad04f2cc0c9141
Readchar
0
120
660
659
2013-04-15T14:38:27Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This readchar(f) function will read a character from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readchar the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read character from
* '''Returns:''' A character from the file type [[Char]]
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","r");
var u:=readchar(f);
close(f);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
29873925e47663b06fc6fe02d0542541ee129877
Readline
0
121
665
664
2013-04-15T14:38:41Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This readline(f) function will read a line (delimited by the new line character) from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readline the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read the line from
* '''Returns:''' A line of the file type [[String]]
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","r");
var u:=readline(f);
close(f);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
4ea7b88528ae2b22863940fba861bee7a2f1a1ff
Sin
0
190
1044
1043
2013-04-15T14:38:57Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sin(d) function will find the sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find sine of
* '''Returns:''' A [[Double]] representing the sine
== Example ==
#include <maths>
function void main() {
var a:=sin(98.54);
var y:=sin(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
1bf701a3975874ef8d7b79f93cad35e9ce4db53a
Sinh
0
196
1079
1078
2013-04-15T14:39:10Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The sinh(d) function will find the hyperbolic sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic sine of
* '''Returns:''' A [[Double]] representing the hyperbolic sine
== Example ==
#include <maths>
function void main() {
var d:=sinh(0.4);
var y:=sinh(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
a8ab9d56598ae9b404186dcbc44c07e9d590a3ae
Sqr
0
116
638
637
2013-04-15T14:39:22Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sqr(d) function will return the result of squaring ''d''.
* '''Pass:''' A [[Double]] to square
* '''Returns:''' A [[Double]] representing the squared result
== Example ==
#include <maths>
function void main() {
var a:=sqr(3.45);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
f1106c368ec367c719727c32704259f8abc135b0
Sqrt
0
117
643
642
2013-04-15T14:39:33Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This sqrt(d) function will return the square root of ''d''.
* '''Pass:''' A [[Double]] to find the square root of
* '''Returns:''' A [[Double]] which is the square root
== Example ==
#include <maths>
function void main() {
var a:=sqrt(8.3);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
1d3b50879f14cddf97f36f9892bb5b9df2d2874f
Strlen
0
126
692
691
2013-04-15T14:39:46Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This strlen(s) function will return the length of string ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=strlen(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
a18daa62766c394f31f8f169be32f01ebe7ad013
Substring
0
127
697
696
2013-04-15T14:39:59Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This substring(s,n,x) function will return the portion of ''s'' between positions ''n'' and ''x''.
* '''Pass:''' A [[String]] and two [[Int|Ints]]
* '''Returns:''' A [[String]] which is a subset of the string passed into it
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=substring(a,2,4);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
0863a8f9cac73fe5b61378d1e114209d19bb3861
Tan
0
191
1050
1049
2013-04-15T14:40:12Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This tan(d) function will find the tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the tangent of
* '''Returns:''' A [[Double]] representing the tangent
== Example ==
#include <maths>
function void main() {
var a:=tan(0.05);
var y:=tan(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
d67f1b0fc6a1f729c22f6eb54c1fd4d62b82fc25
Tanh
0
197
1084
1083
2013-04-15T14:40:27Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The tanh(d) function will find the hyperbolic tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic tangent of
* '''Returns:''' A [[Double]] representing the hyperbolic tangent
== Example ==
#include <maths>
function void main() {
var d:=tanh(10.4);
var y:=tanh(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
9f45406098a6bd8a6a89929c6462917eed3e95ca
Toint
0
128
702
701
2013-04-15T14:40:41Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This toint(s) function will convert the string ''s'' into an integer.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
function void main() {
var a:="234";
var c:=toint(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
10b405c943ba3a1c59943f5ff7177c6824026e5f
Uppercase
0
129
707
706
2013-04-15T14:41:03Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This uppercase(s) function will return the upper case result of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:="HeLlO";
var c:=uppercase(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
f4673a67eac2ecfaa17a6b02dc376dcad03dd3d2
Writebinary
0
204
1116
1115
2013-04-15T14:41:16Z
Polas
1
wikitext
text/x-wiki
== Overview ==
This writebinary(f,a) function will write the value of ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[Int]] variable or value to write into the file in a binary manner
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","w");
writebinary(f,127);
close(f);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
067db57756ce74a273bc21e9256cbdce6328264c
Writestring
0
203
1112
1111
2013-04-15T14:41:32Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The writestring(f,a) function writes the string ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[String]] to write
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","w");
writestring(f,"hello - test");
close(f);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
a225f241d485ee11815b9de22d16963d5af7727a
Oubliette
0
176
952
951
2013-04-22T11:22:35Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
c61602991af16bf5947e6f6e07dd34e4c6e83ee9
953
952
2013-05-03T12:24:52Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* The number and dimensions of a one dimensional partition can be decided at runtime
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
068f883a35d0f9423c2c13abd4427c5604a927f3
954
953
2013-05-03T14:54:02Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* The number and dimensions of a one dimensional partition can be decided at runtime
* Additional assignment operators for addition, subtraction, multiplication, division and modulus
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
b112c3ebfd4856ef3c84dee1194f8745ed560284
955
954
2013-05-09T13:16:21Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* The number and dimensions of a one dimensional partition can be decided at runtime
* Additional assignment operators for addition, subtraction, multiplication, division and modulus
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling of partitions and for a distinct number of partitions per process
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
77e9f0feefc8d3b542709c1aac1d1b9f0a36842e
956
955
2013-05-09T13:16:51Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators for addition, subtraction, multiplication, division and modulus
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling of partitions and for a distinct number of partitions per process
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
ae6aee531cc69513ba6ff3caaae3826ef37ac76a
957
956
2013-05-10T16:25:21Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators for addition, subtraction, multiplication, division and modulus
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling of partitions and for a distinct number of partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
f791bc1c6d3f7a8516bc8c435bf4af93eb1884ab
958
957
2013-05-10T17:58:59Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators for addition, subtraction, multiplication, division and modulus
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling of partitions and for a distinct number of partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
b039adc4576281f1f2752180ea074a767358d4aa
959
958
2013-05-11T16:22:58Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended that, in the future, extra type libraries will be supported via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette simply treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators for addition, subtraction, multiplication, division and modulus
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling of partitions and for a distinct number of partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
5a8d050eca998880be1acf7c328fc5d43a806567
Template:News
10
209
1141
1140
2013-04-22T11:25:52Z
Polas
1
wikitext
text/x-wiki
* Mesham at the Exascale Applications and Software Conference (EASC 2013), further details and slides [http://www.easc2013.org.uk/abstracts#talk6_1 here]
* Specification version 1.0a4 released [http://www.mesham.com/downloads/specification1a4.pdf here]
* Update to Mesham alpha release ''(1.0.0_299)'' available [[Download 1.0|here]]
891121774c0820fd677b24bfd692e0197bce2f0f
1142
1141
2013-05-20T12:08:30Z
Polas
1
wikitext
text/x-wiki
* Specification version 1.0a5 released [http://www.mesham.com/downloads/specification1a5.pdf here]
* Mesham at the Exascale Applications and Software Conference (EASC 2013), further details and slides [http://www.easc2013.org.uk/abstracts#talk6_1 here]
* Update to Mesham alpha release ''(1.0.0_299)'' available [[Download 1.0|here]]
a1ea5b9d758f98aeb9181b93dcbbf0f708b66633
Tutorial - Parallel Types
0
224
1240
1239
2013-05-01T12:53:33Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of types for more advanced parallelism in Mesham</metadesc>
'''Tutorial number six''' - [[Tutorial_-_Shared Memory|prev]] :: [[Tutorial_-_Arrays|next]]
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model, it can carry a performance penalty. In this tutorial we shall look at overriding the default communication, via types, in favour of a more message passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]] between processes ''1'' and ''2''. At line 8, process 1 writes the value ''23'' into this channel and at line 11, process 2 reads that value out of the channel. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example).
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int:: pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=i%2==0?1:2;
var slave:=i%2==0?2:1;
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point to point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that, instead, only process 1 may send and only process 2 may receive.
== Extra parallel control ==
By default the channel type is a blocking call; there are a number of fine grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that it will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we declare ''a'' to be a normal [[Int]] variable, then on line 6 we coerce the [[Broadcast|broadcast]] type onto the existing type chain of ''a'' just for that assignment, telling the type that process ''2'' is the root process. The root process is the one that drives the broadcast itself, i.e. here process 2 is sending the value ''23'' to all other processes. Then on line 7 we simply use ''a'' as a normal program variable to display its value. This use of types is actually quite a powerful one; we can append extra types for a specific expression and, after that expression has completed, the behaviour returns to what it was before.
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, applying some operation, [[Reduce|reduce]] this to a resulting value.
#include <io>
#include <string>
function void main() {
var p;
par p from 0 to 19 {
var a:Int;
a::reduce[0,"sum"]:=p;
if (p==0) print(itostring(a)+"\n");
};
};
This code will combine the values of each process's ''p'' onto process 0 and sum them all up. Multiple operations are supported and are listed in the [[Reduce|reduce type documentation]].
[[Category:Tutorials|Parallel Types]]
40d92f74b03f463b9bfbbc161476c84f1dbb51a7
1241
1240
2013-05-11T16:33:49Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of types for more advanced parallelism in Mesham</metadesc>
'''Tutorial number six''' - [[Tutorial_-_Shared Memory|prev]] :: [[Tutorial_-_Arrays|next]]
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model, it can carry a performance penalty. In this tutorial we shall look at overriding the default communication, via types, in favour of a more message passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]] between processes ''1'' and ''2''. At line 8, process 1 writes the value ''23'' into this channel and at line 11, process 2 reads that value out of the channel. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example).
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int:: pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=i%2==0?1:2;
var slave:=i%2==0?2:1;
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point to point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that, instead, only process 1 may send and only process 2 may receive.
== Extra parallel control ==
By default the channel type is a blocking call; there are a number of fine grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that it will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we declare ''a'' to be a normal [[Int]] variable, then on line 6 we coerce the [[Broadcast|broadcast]] type onto the existing type chain of ''a'' just for that assignment, telling the type that process ''2'' is the root process. The root process is the one that drives the broadcast itself, i.e. here process 2 is sending the value ''23'' to all other processes. Then on line 7 we simply use ''a'' as a normal program variable to display its value. This use of types is actually quite a powerful one; we can append extra types for a specific expression and, after that expression has completed, the behaviour returns to what it was before.
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, applying some operation, [[Reduce|reduce]] this to a resulting value.
#include <io>
#include <string>
function void main() {
var p;
par p from 0 to 19 {
var a:Int;
a::reduce[0,"sum"]:=p;
if (p==0) print(itostring(a)+"\n");
};
};
This code will combine the values of each process's ''p'' onto process 0 and sum them all up. Multiple operations are supported and are listed in the [[Reduce|reduce type documentation]].
== Eager one sided communication ==
Whilst normal one sided communications follow the Logic Of Global Synchrony (LOGS) model of shared memory communication and complete only when a synchronisation is issued, it is possible to override this default behaviour to complete communications at the point of issuing the assignment or access instead.
#include <io>
#include <string>
function void main() {
var i:Int::eageronesided::allocated[single[on[1]]];
proc 0 { i:=23; };
proc 1 { print(itostring(i)+"\n"); };
};
Compile and run this fragment and see that the value ''23'' has been set without any synchronisation. Now remove the eager part of the [[Eageronesided|eager one sided type]] (or remove the type altogether; remember [[onesided]] is the default communication) and see that, without a synchronisation, the value is 0. You can add the [[Sync|sync]] keyword after line 6 to complete the normal one sided call.
[[Category:Tutorials|Parallel Types]]
1ae2fdf701c8aa29564c7dca1435a68351c68459
Image processing
0
142
789
788
2013-05-03T12:07:15Z
Polas
1
/* Source Code */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform some parallel image processing on a black and white image; the processing supported is applying a low or high pass filter to the image. To do this the image needs to be transformed into the frequency domain and then transformed back into the time domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient algorithms exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters and also invoke the high pass filter rather than the low pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two different experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. Evaluations were performed against the Fastest Fourier Transform in the West (FFTW) and, for 128MB, a book example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler will optimise the code in this case to avoid any slowdown).
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
#include <maths>
#include <io>
#include <string>
var n:=256; // image size
var m:=4; // number of processors
var filterThreshold:=10; // filtering threshold for high and low pass filters
function void main() {
var a:array[complex,n,n] :: allocated[single[on[0]]];
var s:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist]];
var s2:array[complex,n,n] :: allocated[horizontal[m] :: col[] :: single[evendist]];
var s3:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist] :: share[s2]];
proc 0 {
loadfile("data/clown.ppm",a);
moveorigin(a);
};
s:=a;
var sinusiods:=computesin();
var p;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
filter(a);
invert(a);
};
s:=a;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
moveorigin(a);
descale(a);
writefile("newclown.ppm", a);
};
};
function array[complex] computesin() {
var elements:= n/2;
var sinusoid:array[complex, elements];
var j;
for j from 0 to (n / 2) - 1 {
var topass:Float;
topass:=((2 * pi() * j) / n);
sinusoid[j].i:=-sin(topass);
sinusoid[j].r:=cos(topass);
};
return sinusoid;
};
function Int getLogn() {
var logn:=0;
var nx:=n;
nx := nx >> 1;
while (nx >0) {
logn++;
nx := nx >> 1;
};
return logn;
};
function void moveorigin(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * pow(-1,(i + j));
data[i][j].i:=data[i][j].i * pow(-1,(i + j));
};
};
};
function void descale(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r / (n * n) ;
data[i][j].i:=-(data[i][j].i / (n * n));
};
};
};
function void invert(var data : array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].i:=-data[i][j].i;
};
};
};
function void FFT(var data : array[complex,n], var sinusoid:array[complex]) {
var i2:=getLogn();
bitreverse(data); // data decomposition
var f0:Double;
var f1:Double;
var increvec;
for increvec from 2 to n {
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec / 2) - 1) {
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 {
// do butterfly for each point in the spectra
f0:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].r)- (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].i);
f1:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].i)+ (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].r);
data[i0 + i1 + (increvec / 2)].r:= data[i0 + i1].r- f0;
data[i0 + i1 + (increvec / 2)].i:=data[i0 + i1].i - f1;
data[i0 + i1].r := data[i0 + i1].r + f0;
data[i0 + i1].i := data[i0 + i1].i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void loadfile(var name:String,var data:array[complex,n,n]) {
var file:=open(name,"r");
readline(file);
readline(file);
readline(file);
readline(file);
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
var red:=readchar(file);
readchar(file);readchar(file);
data[i][j].r:=red;
data[i][j].i:=red;
};
};
close(file);
};
function void writefile(var thename:String, var data:array[complex,n,n]) {
var file:=open(thename,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(n));
writestring(file," ");
writestring(file,itostring(n));
writestring(file,"\n255\n");
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
};
};
close(file);
};
function Int lowpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) < filterThreshold) return 1;
return 0;
};
function Int highpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) > (255-filterThreshold)) return 1;
return 0;
};
function void filter(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * lowpass(i,j) * highpass(i,j);
data[i][j].i:=data[i][j].i * lowpass(i,j) * highpass(i,j);
};
};
};
function void bitreverse(var a:array[complex,n]) {
var j:=0;
var k:Int;
var i;
for i from 0 to n-2 {
if (i < j) {
var swap_temp:Double;
swap_temp:=a[j].r;
a[j].r:=a[i].r;
a[i].r:=swap_temp;
swap_temp:=a[j].i;
a[j].i:=a[i].i;
a[i].i:=swap_temp;
};
k := n >> 1;
while (k <= j) {
j := j - k;
k := k >>1;
};
j := j + k;
};
};
''This version requires at least Mesham version 1.0''
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then the data is redistributed. It would improve the runtime if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for these on Unix based systems are easy to come by (e.g. Eye of GNOME) but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last part on process 0 so that a Windows bitmap (BMP) is created instead.
== Download ==
You can download the entire image processing package [http://www.mesham.com/downloads/fftimage.zip here]; there is also a legacy version for Mesham 0.5 [http://www.mesham.com/downloads/fftimage-0.5.zip here].
There is also a simplified FFT code available [http://www.mesham.com/downloads/fft.mesh here], which the image processing was based upon.
[[Category:Example Codes]]
5cad87140116b223ef7c5106746da2d009daa3db
790
789
2013-05-03T12:14:49Z
Polas
1
/* Download */
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform some parallel image processing on a black and white image; the processing supported is applying a low or high pass filter to the image. To do this the image needs to be transformed into the frequency domain and then transformed back into the time domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient algorithms exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By playing around you can change the filters and also invoke the high pass filter rather than the low pass filter which the code currently uses.
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two different experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. Evaluations were performed against the Fastest Fourier Transform in the West (FFTW) and, for 128MB, a book example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler will optimise the code in this case to avoid any slowdown).
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
#include <maths>
#include <io>
#include <string>
var n:=256; // image size
var m:=4; // number of processors
var filterThreshold:=10; // filtering threshold for high and low pass filters
function void main() {
var a:array[complex,n,n] :: allocated[single[on[0]]];
var s:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist]];
var s2:array[complex,n,n] :: allocated[horizontal[m] :: col[] :: single[evendist]];
var s3:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist] :: share[s2]];
proc 0 {
loadfile("data/clown.ppm",a);
moveorigin(a);
};
s:=a;
var sinusiods:=computesin();
var p;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
filter(a);
invert(a);
};
s:=a;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
moveorigin(a);
descale(a);
writefile("newclown.ppm", a);
};
};
function array[complex] computesin() {
var elements:= n/2;
var sinusoid:array[complex, elements];
var j;
for j from 0 to (n / 2) - 1 {
var topass:Float;
topass:=((2 * pi() * j) / n);
sinusoid[j].i:=-sin(topass);
sinusoid[j].r:=cos(topass);
};
return sinusoid;
};
function Int getLogn() {
var logn:=0;
var nx:=n;
nx := nx >> 1;
while (nx >0) {
logn++;
nx := nx >> 1;
};
return logn;
};
function void moveorigin(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * pow(-1,(i + j));
data[i][j].i:=data[i][j].i * pow(-1,(i + j));
};
};
};
function void descale(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r / (n * n) ;
data[i][j].i:=-(data[i][j].i / (n * n));
};
};
};
function void invert(var data : array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].i:=-data[i][j].i;
};
};
};
function void FFT(var data : array[complex,n], var sinusoid:array[complex]) {
var i2:=getLogn();
bitreverse(data); // data decomposition
var f0:Double;
var f1:Double;
var increvec;
for increvec from 2 to n {
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec / 2) - 1) {
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 {
// do butterfly for each point in the spectra
f0:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].r)- (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].i);
f1:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].i)+ (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].r);
data[i0 + i1 + (increvec / 2)].r:= data[i0 + i1].r- f0;
data[i0 + i1 + (increvec / 2)].i:=data[i0 + i1].i - f1;
data[i0 + i1].r := data[i0 + i1].r + f0;
data[i0 + i1].i := data[i0 + i1].i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void loadfile(var name:String,var data:array[complex,n,n]) {
var file:=open(name,"r");
readline(file);
readline(file);
readline(file);
readline(file);
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
var red:=readchar(file);
readchar(file);readchar(file);
data[i][j].r:=red;
data[i][j].i:=red;
};
};
close(file);
};
function void writefile(var thename:String, var data:array[complex,n,n]) {
var file:=open(thename,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(n));
writestring(file," ");
writestring(file,itostring(n));
writestring(file,"\n255\n");
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
};
};
close(file);
};
function Int lowpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) < filterThreshold) return 1;
return 0;
};
function Int highpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) > (255-filterThreshold)) return 1;
return 0;
};
function void filter(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * lowpass(i,j) * highpass(i,j);
data[i][j].i:=data[i][j].i * lowpass(i,j) * highpass(i,j);
};
};
};
function void bitreverse(var a:array[complex,n]) {
var j:=0;
var k:Int;
var i;
for i from 0 to n-2 {
if (i < j) {
var swap_temp:Double;
swap_temp:=a[j].r;
a[j].r:=a[i].r;
a[i].r:=swap_temp;
swap_temp:=a[j].i;
a[j].i:=a[i].i;
a[i].i:=swap_temp;
};
k := n >> 1;
while (k <= j) {
j := j - k;
k := k >>1;
};
j := j + k;
};
};
''This version requires at least Mesham version 1.0''
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering, and then the data is redistributed. It would improve the runtime if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example will produce an image in the Portable PixMap (PPM) format. Viewers for these on Unix based systems are easy to come by (e.g. Eye of GNOME) but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last part on process 0 so that a Windows bitmap (BMP) is created instead.
== Download ==
You can download the entire image processing package [http://www.mesham.com/downloads/fftimage.zip here]; there is also a legacy version for Mesham 0.5 [http://www.mesham.com/downloads/fftimage-0.5.zip here].
There is also a simplified FFT code available [http://www.mesham.com/downloads/fft.mesh here], which the image processing was based upon, and a version which can be run with any number of processes decided at runtime [http://www.mesham.com/downloads/fft-dynamic.mesh here].
[[Category:Example Codes]]
6a4c11dadc22fcc76c9e6413b31fa9a0826c12eb
Tutorial - Hello world
0
214
1172
1171
2013-05-03T13:10:48Z
Polas
1
/* Group process selection */
wikitext
text/x-wiki
<metadesc>Mesham first tutorial providing an introduction to the language</metadesc>
'''Tutorial number one''' - [[Tutorial_-_Simple_Types|next]]
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program and use the standard functions, and we will discuss different forms of parallel structure. This tutorial assumes that you have the Mesham compiler and runtime library installed and working on your machine as per the instructions [[Download_1.0|here]].
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name in the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with only one process, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
On running the code you should see the following output, although the order of the lines may differ:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a further look at the code and see exactly what it is doing. Lines 1 to 3 include the standard function headers - we are using function calls from all three of these sub libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''.) Wrapping the names in < > brackets tells the preprocessor to look first for system includes (as these are.)
Line 5 declares the main function, which is the program entry point; all compiled codes that you wish to execute require this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we are declaring the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. On line 7 we are using the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which is basically saying ''execute this loop body for values 0 to 3 (4 iterations) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message on standard output. The return value of the [[Pid|pid]] function, which provides the current process's absolute id, and the variable ''p'' are both [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement). It is only possible to print [[String|Strings]], so the [[Itostring|itostring]] function is called to convert an integer value into a string.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel.) Secondly, see how we have displayed both the process id (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they are probably equal, there is no guarantee that they will be - the language will allocate the iterations of a [[Par|par]] loop to the processes it sees fit.
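To make the distinction concrete, here is a minimal sketch contrasting the two compositions. This is illustrative only, using nothing beyond the constructs introduced so far; the exact scheduling is as discussed above.

 #include <io>
 function void main() {
    // sequential composition (;): the second block only starts
    // once the first block has completed
    proc 0 { print("first\n"); };
    proc 0 { print("second\n"); };
    // parallel composition (||): both sides execute at the same time
    skip || proc 0 { print("third\n"); };
 };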
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to the processes which it feels are most appropriate - we are now going to look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see some output similar to the following (but perhaps with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2 and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing via the skip command and at the same time run the par loop.'' In fact a [[Par|par]] loop is a syntactic shortcut for lots of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look really messy!)
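As a sketch of that expansion, with the value of ''p'' fixed by hand in each branch (illustrative only - the real [[Par|par]] loop also decides the process placement for you, so this is not exactly what the compiler generates):

 #include <io>
 #include <parallel>
 #include <string>
 function void main() {
    skip ||
    print("Hello world from pid="+itostring(pid())+" with p=0\n") ||
    print("Hello world from pid="+itostring(pid())+" with p=1\n") ||
    print("Hello world from pid="+itostring(pid())+" with p=2\n") ||
    print("Hello world from pid="+itostring(pid())+" with p=3\n");
 };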
== Absolute process selection ==
We have already said that the [[Par|par]] loop does not make any guarantee as to which iteration is placed upon which process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs: the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
If you compile and execute this it will display two lines of text - one saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!), you can imagine that if you want to select multiple processes to do the same thing by their absolute process IDs then many duplicate proc statements in your code will be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement, which allows the programmer to select multiple processes to execute the same block. Building upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes by their absolute process IDs, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to variable ''p'' here? Because we are no longer using the [[Par|par]] loop we cannot leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' will be and would produce an error during compilation (try it!)
But isn't it a bit annoying having to type each individual process id into a group statement? That is why we support the texas range (...) in a group, which means the entire range from one number to another.
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,...,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
The above code is pretty much the same as the one before (and should produce the same output) - but see how we have saved ourselves some typing by using the texas range in the group process list. This is especially useful when specifying very large ranges of processes, but it has a number of limits. Firstly, the texas range must sit between two process ids (it cannot appear first or last in the list) and secondly the range must go upwards, so the id on the left cannot be larger than or equal to the id on the right.
== Summary ==
Whilst the code we have been looking at here is very simple, in this tutorial we have covered the four basic parallel constructs which we can use to structure our code and discussed the differences between them. We have also looked at writing a simple Mesham code using the main function, and at using standard functions by including the appropriate sub libraries.
[[Category:Tutorials|Hello world]]
4ddce3d7f2af8f0bb70ba5a0b468a8caa6c54b01
Operators
0
43
244
243
2013-05-03T14:52:18Z
Polas
1
/* Operators */
wikitext
text/x-wiki
== Operators ==
#+ Addition
#- Subtraction
#<nowiki>*</nowiki> Multiplication
#/ Division
#++ Pre or post fix addition
#-- Pre or post fix subtraction
#<< Bit shift to left
#>> Bit shift to right
#== Test for equality
#!= Test for inequality
#! Logical negation
#( ) Function call or expression parentheses
#[ ] Array element access
#. Member access
#< Test lvalue is smaller than rvalue
#> Test lvalue is greater than rvalue
#<= Test lvalue is smaller or equal to rvalue
#>= Test lvalue is greater or equal to rvalue
#?: Inline if operator
#|| Logical OR
#&& Logical AND
#+= Plus assignment
#-= Subtraction assignment
#<nowiki>*</nowiki>= Multiplication assignment
#/= Division assignment
#%= Modulus assignment
[[Category:Core Mesham]]
f610fcd23d39cf9915dc781c7ce718790fa69a18
245
244
2013-05-03T14:53:11Z
Polas
1
/* Operators */
wikitext
text/x-wiki
== Operators ==
#+ Addition
#- Subtraction
#<nowiki>*</nowiki> Multiplication
#/ Division
#++ Pre or post fix addition
#-- Pre or post fix subtraction
#<< Bit shift to left
#>> Bit shift to right
#== Test for equality
#!= Test for inequality
#! Logical negation
#( ) Function call or expression parentheses
#[ ] Array element access
#. Member access
#< Test lvalue is smaller than rvalue
#> Test lvalue is greater than rvalue
#<= Test lvalue is smaller or equal to rvalue
#>= Test lvalue is greater or equal to rvalue
#?: Inline if operator
#|| Logical OR
#&& Logical AND
#+= Plus assignment
#-= Subtraction assignment
#<nowiki>*</nowiki>= Multiplication assignment
#/= Division assignment
#%= Modulus assignment
[[Category:Core Mesham]]
b9d7968b6230597f970833592500b0cb1115bcc9
Arraydist
0
245
1345
2013-05-09T15:42:06Z
Polas
1
Created page with '== Syntax == arraydist[integer array] == Semantics == Will distribute data blocks amongst the processes dependant on the integer array supplied. The number of elements in this…'
wikitext
text/x-wiki
== Syntax ==
arraydist[integer array]
== Semantics ==
Will distribute data blocks amongst the processes dependent on the integer array supplied. The number of elements in this array must equal the number of blocks. The index of each element corresponds to the block id, and the value at that location is the process the block resides upon. For example, the value 5 at location 2 will place block number 2 onto process 5.
== Example ==
function void main() {
var d:array[Int,4];
d[0]:=3;
d[1]:=0;
d[2]:=2;
d[3]:=1;
var a:array[Int,16,16] :: allocated[horizontal[4] :: single[arraydist[d]]];
var b:array[Int,16,16] :: allocated[single[on[1]]];
a:=b;
};
In this example the array is split using horizontal partitioning into 4 blocks, with the first block held on process 3, the second on process 0, the third on process 2 and the fourth on process 1. In the assignment on line 10 the data in array ''b'' is distributed to the correct blocks, which are held on the appropriate processes according to the array distribution. To change what data goes where one can simply modify the values in array ''d''.
''Since: Version 1.0''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Distribution Types]]
61b333bed902219d29da647466f1f5928bc43884
Horizontal
0
90
506
505
2013-05-10T16:32:33Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As in the last row of the table, if the two partitions are the same type then a simple copy is performed. However, if they are different then an error will be generated, as Mesham disallows differently typed partitions being assigned to each other.
The programmer can also read and write each element of the partitioned data directly. Either the global coordinates, or the block ID and its local coordinates, can be supplied. This will deduce whether or not the block is on another process, issue any communication as required and complete within that single assignment or access. Because this completes in that expression, rather than waiting for a synchronisation, non local data movement is potentially an expensive operation.
== Dot operators ==
Horizontal blocks also support ''.high'' and ''.low'', which return the top and bottom bounds of a block.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Dot operation
! Semantics
|-
| high
| Largest global coordinate wrt a block in specific block dimension
|-
| low
| Smallest global coordinate wrt a block in specific block dimension
|-
| top
| Largest global coordinate in specific block dimension
|-
| localblocks
| Number of blocks held on local process
|-
| localblockid[i]
| Id number of ith local block
|}
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
5810cf334df1752bb1008f8009fe095b85af6123
507
506
2013-05-10T16:32:48Z
Polas
1
/* Dot operators */
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'', which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As shown in the last row of the table, if the two partitions are of the same type then a simple local copy is performed. However, if they are of different types then an error will be generated, as Mesham disallows differently typed partitions from being assigned to each other.
The programmer can also read and write each element of the partitioned data directly, supplying either the global coordinates or the block ID together with its local coordinates. Mesham will deduce whether or not the block is held on another process, issue any communication as required and complete within that single assignment or access. Because this completes in the expression itself rather than waiting for a synchronisation, non-local data movement is potentially an expensive operation.
== Dot operators ==
Horizontal blocks also support a variety of dot operators to provide metadata:
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Dot operation
! Semantics
|-
| high
| Largest global coordinate of a block in a specific dimension
|-
| low
| Smallest global coordinate of a block in a specific dimension
|-
| top
| Largest global coordinate in a specific dimension
|-
| localblocks
| Number of blocks held on the local process
|-
| localblockid[i]
| ID number of the ith local block
|}
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
fac5c90b7d0f65cb340e36a186f294fe434915d3
508
507
2013-05-10T16:34:20Z
Polas
1
/* Dot operators */
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition the data into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
== Communication ==
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'', which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As shown in the last row of the table, if the two partitions are of the same type then a simple local copy is performed. However, if they are of different types then an error will be generated, as Mesham disallows differently typed partitions from being assigned to each other.
The programmer can also read and write each element of the partitioned data directly, supplying either the global coordinates or the block ID together with its local coordinates. Mesham will deduce whether or not the block is held on another process, issue any communication as required and complete within that single assignment or access. Because this completes in the expression itself rather than waiting for a synchronisation, non-local data movement is potentially an expensive operation.
== Dot operators ==
Horizontal blocks also support a variety of dot operators to provide metadata:
{{OneDimPartitionDotOperators}}
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
db149787531c63052a3c2907243f1f28d6a14eaa
509
508
2013-05-10T16:35:46Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition the data into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data will be distributed amongst the blocks in the most efficient way in order to keep the blocks a similar size; for example, an array of ten elements split into three blocks yields blocks of four, three and three elements. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
== Communication ==
{{OneDimPartitionCommunication}}
== Dot operators ==
Horizontal blocks also support a variety of dot operators to provide metadata:
{{OneDimPartitionDotOperators}}
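== Example ==
The following sketch illustrates how a horizontally partitioned array might be declared; the specific type expression here is an illustrative assumption, following the style of examples elsewhere on this wiki, and should be checked against the language specification.
 var a:array[Int,12]::allocated[horizontal[4]::single[on[0]]];
Under the semantics above, the twelve elements of ''a'' would be split into four blocks of three elements each.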
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
dd8e8fe91aba1b876e11458d10956bf81264b378
Template:OneDimPartitionDotOperators
10
246
1347
2013-05-10T16:33:48Z
Polas
1
Created page with '{| border="1" cellspacing="0" cellpadding="5" align="center" ! Dot operation ! Semantics |- | high | Largest global coordinate wrt a block in specific block dimension |- | low |…'
wikitext
text/x-wiki
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Dot operation
! Semantics
|-
| high
| Largest global coordinate of a block in a specific dimension
|-
| low
| Smallest global coordinate of a block in a specific dimension
|-
| top
| Largest global coordinate in a specific dimension
|-
| localblocks
| Number of blocks held on the local process
|-
| localblockid[i]
| ID number of the ith local block
|}
6d326885ad7994242be475d9e3848cf090c30bb7
Template:OneDimPartitionCommunication
10
247
1349
2013-05-10T16:35:12Z
Polas
1
Created page with 'There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'' which are …'
wikitext
text/x-wiki
There are a number of different default communication rules associated with the horizontal partition, based on the assignment ''assigned variable:=assigning variable'', which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As shown in the last row of the table, if the two partitions are of the same type then a simple local copy is performed. However, if they are of different types then an error will be generated, as Mesham disallows differently typed partitions from being assigned to each other.
The programmer can also read and write each element of the partitioned data directly, supplying either the global coordinates or the block ID together with its local coordinates. Mesham will deduce whether or not the block is held on another process, issue any communication as required and complete within that single assignment or access. Because this completes in the expression itself rather than waiting for a synchronisation, non-local data movement is potentially an expensive operation.
e530fed33dd75c1ed2547b0063556ef3b836a457
1350
1349
2013-05-10T16:35:28Z
Polas
1
wikitext
text/x-wiki
There are a number of different default communication rules associated with the one dimensional partitions, based on the assignment ''assigned variable:=assigning variable'', which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As shown in the last row of the table, if the two partitions are of the same type then a simple local copy is performed. However, if they are of different types then an error will be generated, as Mesham disallows differently typed partitions from being assigned to each other.
The programmer can also read and write each element of the partitioned data directly, supplying either the global coordinates or the block ID together with its local coordinates. Mesham will deduce whether or not the block is held on another process, issue any communication as required and complete within that single assignment or access. Because this completes in the expression itself rather than waiting for a synchronisation, non-local data movement is potentially an expensive operation.
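For illustration, consider a hypothetical pair of declarations (the exact type expressions are assumptions in the style of examples elsewhere on this wiki, not verified syntax):
 var p:array[Int,8]::allocated[horizontal[4]::single[on[0]]];
 var s:array[Int,8]::allocated[single[on[0]]];
Following the table above, the assignment ''s:=p'' would gather the blocks of ''p'' into the non-partitioned ''s'', whilst ''p:=s'' would scatter ''s'' amongst the blocks of ''p''.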
af4b91ec71f70f0988bd43c7e4bd941480ae3318
Vertical
0
91
515
514
2013-05-10T16:36:00Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
vertical[blocks]
== Semantics ==
Same as the [[horizontal]] type but will partition the array vertically. The figure below illustrates partitioning an array into 4 blocks vertically.
<center>[[Image:vert.jpg|Vertical Partition of an array into four blocks via type oriented programming]]</center>
== Communication ==
{{OneDimPartitionCommunication}}
== Dot operators ==
Vertical blocks also support a variety of dot operators to provide metadata:
{{OneDimPartitionDotOperators}}
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
ff782b0995c5f24e4f31410874d51a8ae4ddb72d
Eageronesided
0
248
1352
2013-05-11T16:17:20Z
Polas
1
Created page with '== Syntax == eageronesided[a,b] == Syntax == eageronesided[] == Semantics == Identical to the [[Onesided]] type, but will perform onesided communication rather than p2p. Thi…'
wikitext
text/x-wiki
== Syntax ==
eageronesided[a,b]
== Syntax ==
eageronesided[]
== Semantics ==
Identical to the [[Onesided]] type in that it performs one sided communication rather than p2p, but remote memory access happens immediately and is not linked to the synchronisation keyword. Because RMA access happens immediately, this form of communication is potentially less performant than normal [[Onesided|one sided]] communication.
== Example ==
 function void main() {
   var i:Int::eageronesided::allocated[single[on[2]]];
   proc 0 {i:=34;};
 };
In the above code example the variable ''i'' is declared to be an Int, allocated on process two only and using eager one sided communication. On the second line an assignment occurs on process zero, which immediately writes the value from process zero into the memory held by process two; after that line the value is available to every other process.
''Since: Version 1.0''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
628fe61159f9ccc4aa4db25d4f8f871b09dd72e9
Specification
0
177
977
976
2013-05-20T12:07:45Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Specification 1.0a_5|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham language specification|url=http://www.mesham.com|image=Spec.png|version=1.0a_5|released=May 2013}}
''The latest version of the Mesham language specification is 1.0a_5''
== Version 1.0a_5 - May 2013 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_5, is available for download. This version was released in May 2013 and is the base specification version in the 1 series. It builds upon the previous 0.5 language by formalising some aspects of the language and the programming model. The type library has been formalised to contain many of the 0.5 language types, but with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality which can be used by the programmer.
Download [http://www.mesham.com/downloads/specification1a5.pdf this latest version here]
a2e59a5a6f9d0643b060a83f09313d3c79dd73e8
Download 1.0
0
229
1287
1286
2013-05-20T14:16:48Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_356|released=May 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_356 released 20th May 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.02 released 20th May 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running under a 32 bit or 64 bit system, then issue uname -m; a result of x86_64 means 64 bit, whilst any other value such as i686 means 32 bit.''
== Prerequisites ==
In order to run and compile Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form, and most systems make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst it is a manual installation procedure, the good news is that this is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add the location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its various components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and ''all components'' archives in the ''includes'' directory). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
An optional environment variable is the '''MESHAM_C_COMPILER_ARGS''' variable, which allows for specific flags to be provided to the underlying C compiler on each run regardless of the Mesham code or explicit user command line arguments. This is useful to apply certain machine specific optimisations.
If you do not wish to set these last two environment variables then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well an executable ''test'' will appear; run this via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
   group 0,1,2,3 {
     print("Hello from process "+itostring(pid())+"\n");
   };
 };
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
10979eadae49a55e74f3977d8b95bed20465f5b0
1288
1287
2013-08-16T16:02:19Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_411|released=August 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_411 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Experimental thread based runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtlthreads64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtlthreads32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running under a 32 bit or 64 bit system, then issue uname -m; a result of x86_64 means 64 bit, whilst any other value such as i686 means 32 bit.''
== Prerequisites ==
In order to run and compile Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form, and most systems make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to get these packages if you do not already have them installed.
== Installation Instructions ==
Whilst it is a manual installation procedure, the good news is that this is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add the location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its various components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and ''all components'' archives in the ''includes'' directory). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
An optional environment variable is the '''MESHAM_C_COMPILER_ARGS''' variable, which allows for specific flags to be provided to the underlying C compiler on each run regardless of the Mesham code or explicit user command line arguments. This is useful to apply certain machine specific optimisations.
If you do not wish to set these last two environment variables then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well an executable ''test'' will appear; run this via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
   group 0,1,2,3 {
     print("Hello from process "+itostring(pid())+"\n");
   };
 };
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
5c678dc781a98dededc6bf09c6ba8b6889c89550
1289
1288
2013-08-16T16:09:34Z
Polas
1
/* Prerequisites */
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_411|released=August 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch and this line of compilers (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_411 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Experimental thread based runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtlthreads64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtlthreads32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running under a 32 bit or 64 bit system, then issue uname -m; a result of x86_64 means 64 bit, whilst any other value such as i686 means 32 bit.''
== Prerequisites ==
In order to run and compile Mesham code you need to have an implementation of MPI (version 2) and a C compiler. We suggest '''MPICH-2''' and '''GCC''', which are available in source and binary form, and most systems make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to get these packages if you do not already have them installed.
If you are using the experimental thread based runtime library then MPI is not required; the thread based RTL uses pthreads, which is usually already installed.
== Installation Instructions ==
Whilst it is a manual installation procedure, the good news is that this is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested to either add the location to your path environment variable or add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find its various components. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and ''all components'' archives in the ''includes'' directory). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
An optional environment variable is the '''MESHAM_C_COMPILER_ARGS''' variable, which allows for specific flags to be provided to the underlying C compiler on each run regardless of the Mesham code or explicit user command line arguments. This is useful to apply certain machine specific optimisations.
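Putting the above together, a typical set of exports might look as follows; the paths here are placeholders for wherever you unpacked the archives, not prescribed locations.

```shell
# Hypothetical environment setup for the Mesham compiler; adjust the
# placeholder paths to match your own installation directories.
export MESHAM_C_COMPILER=mpicc
export MESHAM_SYS_INCLUDE="$HOME/mesham/includes"
export MESHAM_C_INCLUDE="$HOME/mesham/rtl"
# Multiple library directories are separated with ';' as described above.
export MESHAM_C_LIBRARY="$HOME/mesham/rtl;$HOME/mesham/libgc"
```

Adding these lines to your ''.bashrc'' makes them persistent across sessions.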
If you do not wish to set these last two environment variables then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'' to display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well an executable ''test'' will appear; run this via ''mpiexec -np 4 ./test'' after ensuring your favourite MPI process manager is running.
 #include <io>
 #include <string>
 #include <parallel>
 function void main() {
   group 0,1,2,3 {
     print("Hello from process "+itostring(pid())+"\n");
   };
 };
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
527cc1a7febe89ffa0d5643a2512d7063676ddef
Oubliette
0
176
960
959
2013-05-20T14:17:23Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is a complete rewrite of the previous [[Arjuna]] line, drawing on lessons learned and on the fact that the language has reached a stable state in terms of its definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended in the future to support extra libraries via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded in the compiler, Oubliette just considers these to be normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
=== Build 356 (May 2013) ===
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* First alpha release of the Oubliette compiler
39ad4e87550fabc275e7bb734d7c65d14a65843a
961
960
2013-05-20T14:20:27Z
Polas
1
/* Update history */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. It is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language definition and its type-oriented approach have reached a stable state.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries via dynamic libraries in the future. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
a5c6e26102b3f1aeb074bd11023fc0fef482d093
962
961
2013-07-01T14:14:34Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. It is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language definition and its type-oriented approach have reached a stable state.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries via dynamic libraries in the future. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Updated proc, par and group semantics to be non-blocking on entry to the blocks
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
8267e50ea2f5a33b0e27777d52fc5f75acb3554b
963
962
2013-07-01T14:48:23Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. It is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language definition and its type-oriented approach have reached a stable state.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries via dynamic libraries in the future. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Updated proc, par and group semantics to be non-blocking on entry to the blocks
* Sleep system function added
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
29584db846350fb6008e88fae3e2a96da6bd6cb5
964
963
2013-07-26T11:33:53Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. It is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language definition and its type-oriented approach have reached a stable state.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries via dynamic libraries in the future. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Updated proc, par and group semantics to be non-blocking on entry to the blocks
* Sleep system function added
* Abstracted all communications into a lower level communications layer
* Additional version and requirements reporting in the resulting executable
* Heap default bug fix for reference records
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
5d7e9827606622710b574733caae001d4b0e915b
965
964
2013-08-12T12:49:19Z
Polas
1
/* Latest (to be released) */
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. It is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language definition and its type-oriented approach have reached a stable state.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries via dynamic libraries in the future. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
* Updated proc, par and group semantics to be non-blocking on entry to the blocks
* Sleep system function added
* Abstracted all communications into a lower level communications layer
* Additional version and requirements reporting in the resulting executable
* Heap default bug fix for reference records
* Threading support added which allows for virtual processors to be lightweight threads rather than processes
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
936240527dde8d59fabb178828c626cf9e0b1367
966
965
2013-08-16T16:03:02Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. It is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language definition and its type-oriented approach have reached a stable state.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries via dynamic libraries in the future. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
=== Build 411 (August 2013) ===
* Updated proc, par and group semantics to be non-blocking on entry to the blocks
* Sleep system function added
* Abstracted all communications into a lower level communications layer
* Additional version and requirements reporting in the resulting executable
* Heap default bug fix for reference records
* Threading support added which allows for virtual processors to be lightweight threads rather than processes
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
45608a91fa0ccecd3a353c9b987925649ce12e41
967
966
2013-08-16T16:03:20Z
Polas
1
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. It is a complete rewrite of the previous [[Arjuna]], drawing on lessons learned and on the fact that the language definition and its type-oriented approach have reached a stable state.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate, and the intention is to support additional type libraries via dynamic libraries in the future. Unlike [[Arjuna]], which has the standard function library hard coded into the compiler, Oubliette treats these as normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
=== Build 411 (August 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a6.pdf specification 1a6]
* Updated proc, par and group semantics to be non-blocking on entry to the blocks
* Sleep system function added
* Abstracted all communications into a lower level communications layer
* Additional version and requirements reporting in the resulting executable
* Heap default bug fix for reference records
* Threading support added which allows for virtual processors to be lightweight threads rather than processes
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
4a36fbe5b496c73b4fd76247163a8eb4ee1e2478
Template:Downloads
10
11
64
63
2013-05-20T14:17:59Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|Complete compiler (''version 1.0.0_356'')]]
*[[Download_rtl_1.0|Runtime library 1.0.02]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
be14823ece014afccd754639f99c8995703c2595
65
64
2013-08-16T16:09:55Z
Polas
1
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|Complete compiler (''version 1.0.0_411'')]]
*[[Download_rtl_1.0|Runtime library 1.0.03]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
0a82304e9ab76590a50013e401fb38aaaf342dd3
Download rtl 1.0
0
232
1303
1302
2013-05-20T14:18:21Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham type oriented parallel programming language runtime library</metadesc>
{{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest runtime library compatible with version 1.0 of the Mesham compiler.|url=http://www.mesham.com|image=Runtimelibrary.png|version=1.0.02|released=May 2013}}
== Runtime Library Version 1.0 ==
Version 1.0 is currently the most up-to-date version of the Mesham runtime library and is required by Mesham 1.0. This version of the library has been re-engineered to support the [[Oubliette]] compiler line and as such is not backwards compatible with older versions.
This line of runtime library is known as the [[Idaho]] line.
== Download ==
You can download the runtime library: '''[http://www.mesham.com/downloads/rtl64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtl32.zip 32 bit here]'''.
== Garbage collector ==
By default you will also need the libgc garbage collector, which can be found [[Download_libgc|here]].
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 1.0|download 1.0 package]] page.
5caefa116866c2c05d6ae46b40500dd7d7d8bb17
1304
1303
2013-08-16T16:08:09Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham type oriented parallel programming language runtime library</metadesc>
{{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest runtime library compatible with version 1.0 of the Mesham compiler.|url=http://www.mesham.com|image=Runtimelibrary.png|version=1.0.03|released=August 2013}}
== Runtime Library Version 1.0 ==
Version 1.0 is currently the most up-to-date version of the Mesham runtime library and is required by Mesham 1.0. This version of the library has been re-engineered to support the [[Oubliette]] compiler line and as such is not backwards compatible with older versions.
This line of runtime library is known as the [[Idaho]] line.
== Download ==
You can download the runtime library: '''[http://www.mesham.com/downloads/rtl64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtl32.zip 32 bit here]'''.
== Experimental thread based ==
We have created an experimental thread-based RTL, where all of the programmer's parallel processes are represented as threads and all communication is implemented using shared memory. By running inside threads, rather than separate processes, this has the benefits of reduced overhead on multi-core machines and no need for an MPI implementation to be installed. Threading is achieved via the pthreads library, which is readily available on Linux. Your code should run without modification, and all of the example code on this website, including the tutorials, has been tested and found to work in the threading mode.
The thread based runtime library can be downloaded, '''[http://www.mesham.com/downloads/rtlthreads64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtlthreads32.zip 32 bit here]'''
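To picture what this buys you, here is a minimal Python analogue (an illustration only, not the Mesham RTL or pthreads itself): two "virtual processors" run as lightweight threads in one process and communicate through shared memory, so no inter-process messaging layer is needed.

```python
# Illustration: two "virtual processors" as threads sharing memory.
# All names here (proc1, proc2, shared) are invented for this sketch.
import threading

shared = {"a": None}          # memory visible to both threads
ready = threading.Event()     # signals that the value has been written

def proc1():
    shared["a"] = 23          # write straight into shared memory
    ready.set()               # no message passing required

def proc2(out):
    ready.wait()              # block until proc1 has written
    out.append(shared["a"])   # read the value back out

result = []
t1 = threading.Thread(target=proc1)
t2 = threading.Thread(target=proc2, args=(result,))
t2.start(); t1.start()
t1.join(); t2.join()
print(result[0])  # 23
```

The same exchange between separate processes would need an MPI send/receive (or similar); with threads it is a plain memory write guarded by a synchronisation primitive, which is where the reduced overhead comes from.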
== Garbage collector ==
By default you will also need the libgc garbage collector, which can be found [[Download_libgc|here]].
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 1.0|download 1.0 package]] page.
1a283a0d3d9621a6a043ea8ec772471d806ba46f
Template:News
10
209
1143
1142
2013-05-20T14:19:10Z
Polas
1
wikitext
text/x-wiki
* Update to Mesham alpha release ''(1.0.0_356)'' available [[Download 1.0|here]]
* Specification version 1.0a5 released [http://www.mesham.com/downloads/specification1a5.pdf here]
* Mesham at the Exascale Applications and Software Conference (EASC 2013), further details and slides [http://www.easc2013.org.uk/abstracts#talk6_1 here]
364e6aced1680f5c977168d848e507a699b12d4c
1144
1143
2013-08-16T15:47:25Z
Polas
1
wikitext
text/x-wiki
* Specification version 1.0a6 released [http://www.mesham.com/downloads/specification1a6.pdf here]
* Update to Mesham alpha release ''(1.0.0_356)'' available [[Download 1.0|here]]
* Mesham at the Exascale Applications and Software Conference (EASC 2013), further details and slides [http://www.easc2013.org.uk/abstracts#talk6_1 here]
2db6bbcc14087d62003371b60743f476c6dc818a
1145
1144
2013-11-05T17:13:15Z
Polas
1
wikitext
text/x-wiki
* Mesham at the PGAS 2013 conference, paper downloadable [http://www.pgas2013.org.uk/sites/default/files/finalpapers/Day2/R5/1_paper12.pdf here]
* Specification version 1.0a6 released [http://www.mesham.com/downloads/specification1a6.pdf here]
* Update to Mesham alpha release ''(1.0.0_356)'' available [[Download 1.0|here]]
b5996191fe1756c0de53148f8f395234ef3f7a91
1146
1145
2013-11-05T17:14:25Z
Polas
1
wikitext
text/x-wiki
* Mesham at the PGAS 2013 conference, paper downloadable [http://www.pgas2013.org.uk/sites/default/files/finalpapers/Day2/R5/1_paper12.pdf here]
* Specification version 1.0a6 released [http://www.mesham.com/downloads/specification1a6.pdf here]
* Update to Mesham alpha release ''(1.0.0_411)'' available [[Download 1.0|here]]
16d937c0aba3a573746d0588ab0eb726748f7668
Tutorial - Parallel Types
0
224
1242
1241
2013-06-12T14:38:51Z
Polas
1
/* Eager one sided communication */
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of types for more advanced parallelism in Mesham</metadesc>
'''Tutorial number six''' - [[Tutorial_-_Shared Memory|prev]] :: [[Tutorial_-_Arrays|next]]
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model, it can carry a performance penalty. In this tutorial we shall look at overriding the default communication, via types, to achieve a more message-passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]] between processes ''1'' and ''2''. Process 1 writes the value ''23'' into this channel and process 2 then reads that value out of it. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example.)
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int:: pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=i%2==0?1:2;
var slave:=i%2==0?2:1;
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point to point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that instead, only process 1 may send and only 2 may receive.
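The alternating data flow above can be sketched in plain Python (a hypothetical simulation, not Mesham code): the pipe is modelled as a single shared slot, and the master/slave roles swap on each iteration exactly as the ''i%2'' test does in the Mesham code.

```python
# Hypothetical sketch (plain Python, not Mesham): the message pattern of
# the pipe example above. The pipe is modelled as a single-slot variable;
# the master writes, the slave reads, with direction alternating each
# iteration exactly as the i%2 test selects master and slave.
def simulate_pipe(iterations=10):
    received = []  # (receiving process, value) pairs, in order
    for i in range(iterations):
        master = 1 if i % 2 == 0 else 2
        slave = 2 if i % 2 == 0 else 1
        a = i              # master writes i into the pipe
        b = a              # slave reads it back out
        received.append((slave, b))
    return received

print(simulate_pipe())
```

Swapping the pipe for a channel would fix the direction, so only the (1, 2) iterations of this simulation would be legal.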
== Extra parallel control ==
By default the channel type is blocking; there are a number of fine grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that it will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we are declaring ''a'' to be a normal [[Int]] variable. In the assignment we coerce the [[Broadcast|broadcast]] type with the existing type chain of ''a'', just for that statement, and tell the type that process ''2'' is the root process. The root process is the one that drives the broadcast, i.e. here process 2 sends the value ''23'' to all other processes. In the following line we simply use ''a'' as a normal program variable to display its value. This use of types is quite a powerful one; we can append extra types for a specific expression and, after that expression has completed, the behaviour reverts to what it was before.
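The value movement of a broadcast can be sketched in plain Python (a hypothetical simulation, not Mesham or MPI code): whatever the root holds before the operation, every process holds afterwards.

```python
# Hypothetical sketch (plain Python, not Mesham): the effect of
# a::broadcast[2]:=23 across, say, four processes. The root (process 2)
# supplies the value; after the broadcast every process holds a copy.
def broadcast(values, root):
    # values holds each process's copy of 'a' before the broadcast
    return [values[root] for _ in values]

before = [0, 0, 23, 0]            # only the root has set a
after = broadcast(before, root=2)
print(after)                      # every process now holds 23
```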
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, applying some operation, [[Reduce|reduce]] this to a resulting value.
#include <io>
#include <string>
function void main() {
var p;
par p from 0 to 19 {
var a:Int;
a::reduce[0,"sum"]:=p;
if (p==0) print(itostring(a)+"\n");
};
};
This code will combine the values of each process's ''p'' onto process 0 and sum them. Multiple operations are supported and are listed in the [[Reduce|reduce type documentation]].
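The result of this reduction can be sketched in plain Python (a hypothetical simulation, not Mesham or MPI code): every process contributes its id and the root receives the combined sum.

```python
# Hypothetical sketch (plain Python, not Mesham): the effect of
# a::reduce[0,"sum"]:=p over 20 processes - each process contributes its
# id p, and the root (process 0) ends up holding the combined sum.
def reduce_sum(contributions):
    # the root receives the sum of every process's contribution
    return sum(contributions)

total = reduce_sum([p for p in range(20)])
print(total)  # 0+1+...+19 = 190
```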
== Eager one sided communication ==
Whilst normal one sided communications follow the Logic Of Global Synchrony (LOGS) model of shared memory communication and complete only when a synchronisation is issued, it is possible to override this default behaviour to complete communications at the point of issuing the assignment or access instead.
#include <io>
#include <string>
function void main() {
var i:Int::eageronesided::allocated[single[on[1]]];
proc 0 { i:=23; };
sync;
proc 1 { print(itostring(i)+"\n"); };
};
Compile and run this fragment and see that the value ''23'' has been set without any explicit synchronisation on variable ''i''. Now remove the eager part of the [[Eageronesided|eager one sided type]] (or remove it altogether; remember [[onesided]] is the default communication) and see that, without a synchronisation, the value is 0. You can add the [[Sync|sync]] keyword after the assignment to complete the normal one sided call. We require a synchronisation between the proc blocks here to ensure that process 1 does not read the value before process 0 has set it.
[[Category:Tutorials|Parallel Types]]
d77cb9304855c7a7af40589a701d4ffc96f995ec
Group
0
181
1004
1003
2013-07-01T14:13:15Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks; values, variables or a texas range (with limits) known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' The texas range ''...'' is supported, although it may only appear between two values (specifying an inclusive range) and the preceding value must be smaller than or equal to the following one.
== Example ==
#include <io>
function void main() {
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
group 1,...,3,5,...,8 {
print("Hello world from pid 1, 2, 3, 5, 6, 7 or 8\n");
};
};
The code fragment will involve 9 processes (0 to 8 inclusive.) Only processes zero and three will display the first message; the second is displayed by processes 1, 2, 3, 5, 6, 7 and 8, as described by the texas ranges.
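The expansion of these texas ranges can be sketched in plain Python (a hypothetical helper, not part of Mesham): ''...'' between two values fills in the ranks strictly between them, the endpoints appearing as plain values.

```python
# Hypothetical sketch (plain Python, not Mesham): expanding a group
# specification such as 1,...,3,5,...,8 into the full list of ranks.
# '...' between two values denotes an inclusive range.
def expand_texas(spec):
    ranks, i = [], 0
    while i < len(spec):
        if spec[i] == '...':
            lo, hi = spec[i - 1], spec[i + 1]
            ranks.extend(range(lo + 1, hi))  # endpoints are added as plain values
            i += 1
        else:
            ranks.append(spec[i])
            i += 1
    return ranks

print(expand_texas([1, '...', 3, 5, '...', 8]))  # [1, 2, 3, 5, 6, 7, 8]
```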
''Since: Version 1.0''
[[Category:Parallel]]
579a2c9fc2fb20c2854e2eacd859867573d26b72
Par
0
39
219
218
2013-07-01T14:13:35Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop, each iteration will execute concurrently on different processes. This allows the programmer to write code MPMD style, with the limitation that bounds ''a'' and ''b'' must be known during compilation. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.
== Example ==
#include <io>
function void main() {
var p;
par p from 0 to 9 {
print("Hello world\n");
};
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
''Since: Version 0.41b''
[[Category:Parallel]]
3908eb26930ae997d9c2525ae27e75341f634582
Proc
0
40
229
228
2013-07-01T14:13:51Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
};<br>
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and allocated multiple will in fact, by inference, be allocated to the group of processes which contains a single process whose rank is the same as the proc block's.
== Example ==
#include <io>
function void main() {
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
''Since: Version 0.41b''
[[Category:Parallel]]
75a24e7b06d099010a8d14a6f8188a48c65f9f37
Sleep
0
249
1354
2013-07-01T14:47:41Z
Polas
1
Created page with '== Overview == This sleep(l) function will pause execution for ''l'' milliseconds. * '''Pass:''' A [[Long]] number of milliseconds to sleep for * '''Returns:''' Nothing == Ex…'
wikitext
text/x-wiki
== Overview ==
The sleep(l) function will pause execution for ''l'' milliseconds.
* '''Pass:''' A [[Long]] number of milliseconds to sleep for
* '''Returns:''' Nothing
== Example ==
#include <system>
function void main() {
sleep(1000);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:System Functions]]
0bc3a1aca52f1253f51a5b6fbc0c8a320332c02f
Tutorial - Advanced Types
0
238
1326
1325
2013-07-11T14:13:26Z
Polas
1
/* An example */
wikitext
text/x-wiki
<metadesc>Tutorial describing advanced type features of Mesham</metadesc>
'''Tutorial number nine''' - [[Tutorial_-_Dynamic Parallelism|prev]]
== Introduction ==
Mesham has a number of advanced typing features over and above type chains and type coercion. In this tutorial we will look at some of these features, how they might be used and how they can simplify your program code.
== Type Variables ==
The language has a concept of a type variable, which is a compile-time, programmer-defined type representing a more complex type chain. Let's have a look at this in more detail via an example
function void main() {
typevar typeA::=Int::allocated[multiple];
typevar typeB::=String::allocated[single[on[3]]];
var a:typeA;
var b:typeB;
};
In this example we create two type variables called ''typeA'' and ''typeB'' which represent different type chains. The actual program variables ''a'' and ''b'' are then declared using these type variables. Notice how type assignment uses the ''::='' operator rather than the normal program variable assignment operator '':=''.
function void main() {
typevar typeA::=Int::allocated[multiple];
var a:typeA;
typeA::=String;
var b:typeA;
typeA::=typeA::const;
var c:typeA;
};
This example demonstrates assigning types and chains to existing type variables. First we declare the type variable ''typeA'' and use it in the declaration of program variable ''a''. We then modify the value of the type variable ''typeA'', using the ''::='' operator, to be a [[String]] instead, and declare variable ''b'' using this type variable, which effectively sets its type to be a String. The final type assignment demonstrates how we can use the type variable in type chain modification, so that variable ''c'' is a constant [[String]].
'''Note:''' It is important to appreciate that type variables exist only during compilation, they do not exist at runtime and as such can not be used in conditional statements.
== Types of program variables ==
Mesham provides some additional keywords to help manage and reference the type of program variables; however, it is imperative to remember that these are static, i.e. they only exist during compilation.
=== Currenttype ===
Mesham has an inbuilt [[Currenttype|currenttype]] keyword which will result in the current type chain of a program variable.
a:currenttype a :: const;
a:a::const
In this code snippet both lines are identical: they set the type of program variable ''a'' to be its current type chain combined with the [[Const|const]] type. Note that using a program variable in a type chain, as in the second line, is a syntactic shortcut for its current type (via the [[Currenttype|currenttype]] keyword) and either form can be used.
=== Declaredtype ===
It can sometimes be useful to reference or even revert back to the declared type of a program variable later on in execution. To do this we supply the [[Declaredtype|declaredtype]] keyword.
function void main() {
var a:Int;
a:a::const;
a:declaredtype a;
a:=23;
};
This code will compile and work fine because, although we are coercing the type of ''a'' to be that of the [[Const|const]] type at line three, on line four we are reverting the type to be the declared type of the program variable. If you are unsure about why this is the case, then move the assignment around to see when the code will not compile with it.
== An example ==
Type variables are commonly used with [[Record|records]] and [[Referencerecord|referencerecords]]. In fact, the [[Complex|complex]] type obtained from the [[:Category:Maths_Functions|maths library]] is a type variable.
#include <string>
#include <io>
typevar node;
node::=referencerecord[Int, "data", node, "next"];
function void main() {
var i;
var root:node;
root:=null;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=root;
root:=newnode;
};
while (root != null) {
print(itostring(root.data)+"\n");
root:=root.next;
};
};
This code will build up a linked list of numbers and then walk it, displaying each number as it goes. Whilst it is a relatively simple code, it illustrates how one might use type variables to improve the readability of their code. One important point to note is a current limitation in the Mesham parser: we are forced to declare the type variable ''node'' first and then separately assign to it on the following line. The reason is that this assignment references back to the ''node'' type variable in the [[Referencerecord|referencerecord]] type, and as such the type variable must already exist.
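The build-and-walk logic above can be sketched in plain Python (a hypothetical analogue, not Mesham code), with a small ''Node'' class standing in for the ''node'' type variable.

```python
# Hypothetical sketch (plain Python, not Mesham): the same build-and-walk
# logic as the referencerecord example above. Each new node is pushed
# onto the front of the list, so the walk visits values in reverse order.
class Node:
    def __init__(self, data, next_node):
        self.data, self.next = data, next_node

root = None
for i in range(10):            # push 0..9 onto the front of the list
    root = Node(i, root)

walked = []
node = root
while node is not None:        # walk the list from the head
    walked.append(node.data)
    node = node.next
print(walked)                  # most recently pushed value first
```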
== Limitations ==
There are some important limitations to note about the current use of types. Types currently only exist explicitly during compilation - what this means is that you can not do things such as passing them into functions or communicating them. Additionally, once allocation information (the [[Allocated|allocated]] type) and its subtypes have been set then you can not modify this, nor can you change the [[:Category:Element_Types|element type]].
[[Category: Tutorials|Advanced Types]]
02b4fae459609f8f15e2c5cb98e4ea140c20feec
Tutorial - Dynamic Parallelism
0
237
1321
1320
2013-07-26T11:29:54Z
Polas
1
/* A more complex example */
wikitext
text/x-wiki
<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]] :: [[Tutorial_-_Advanced Types|next]]
== Introduction ==
If you are following these tutorials in order then you could be forgiven for assuming that Mesham requires the programmer to explicitly set the number of processes in their code. This is entirely untrue and, whilst structuring your code around this assumption can lead to cleaner code, Mesham supports a dynamic number of processes which is decided upon at runtime. This tutorial will look at how you can use dynamic parallelism and write your code in this manner.
== In its simplest form ==
#include <parallel>
#include <io>
#include <string>
function void main() {
print(itostring(pid())+"\n");
};
Compile the above code and run it with one process, now with ten, now with any number you want. See how, even though the code explicitly requires one process, running with more will simply execute that code on all the other processes. There are a number of rules associated with writing parallel codes in this fashion; firstly '''the number of processes can exceed the required number but it can not be smaller''', so if our code requires ten processes then we can run it with twenty, one hundred or even one thousand, however we can not run it with nine. Secondly, the code and data applicable to these extra processes is all variables allocated [[Multiple|multiple]] and all code which is written SPMD style (i.e. outside of [[Par|par]], [[Group|group]], [[Proc|proc]] and parallel composition.)
== A more complex example ==
So let's have a look at something a bit more complex that involves the default shared memory communication.
#include <parallel>
#include <io>
#include <string>
function void main() {
var numberProc:=processes();
var s:array[Int, numberProc]::allocated[single[on[0]]];
s[pid()]:=pid();
sync;
proc 0 {
var i;
for i from 0 to processes() - 1 {
print(itostring(i)+" = "+itostring(s[i])+"\n");
};
};
};
Compile and run this example with any number of processes and look at how the code handles us changing this number. There are a couple of general points to make about this code; firstly notice that we are still using the [[Proc|proc]] parallel construct of Mesham for process selection (which is absolutely fine to do.) We could instead have done something like ''if (pid()==0)''; the choice is entirely up to the programmer.
Next, modify variable ''s'' to be on process 2 (and change the [[Proc|proc]] statement to run on this process also). If you recompile and run this code then it will work fine as long as the number of processes is at least the required number (which is 3.) If you were to try and run the code with 2 processes, for example, then it will give you an error; the only exception is the usual rule that if you run it with one process then Mesham will automatically spawn the required number and run over these. However, this illustration raises an important point - how can we (easily) tell how many processes to use? Happily there are two ways: either compile the code using the ''-summary'' flag or run the resulting Mesham executable with the ''--mesham_p'' flag, which will report how many processes that executable expects to be run over.
== Dynamic type arguments ==
Often, when wanting to write parallel code in this manner, you also want to use flexible message passing constructs. Happily all of the message passing override types such as [[Channel|channel]], [[Reduce|reduce]], [[Broadcast|broadcast]] support the provision of arguments which are only known at runtime. Let's have a look at an example to motivate this.
#include <parallel>
#include <io>
#include <string>
function void main() {
var a:=pid();
var b:=a+1;
var c:=a-1;
var c1:Int::allocated[multiple]::channel[a,b];
var c2:Int::allocated[multiple]::channel[c,a];
var t:=0;
if (pid() > 0) t:=c2;
if (pid() < processes() - 1) c1:=t+a;
t:=t+a;
if (pid() + 1 == processes()) print(itostring(t)+"\n");
};
The above code is a prefix sums type algorithm, where each process will send to the next one (whose id is one greater than its own) its current id plus all of the ids of the processes before it. The process with the largest id then displays the total, which obviously depends on the number of processes used to run the code. One point to note is that we can (currently) only use variables and values as arguments to types; for example, if you used the function call ''pid()'' directly in the [[Channel|channel]] type then it would give a syntax error. This is a limitation of the Mesham parser and will be addressed in a future release.
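The values flowing through this pipeline can be sketched in plain Python (a hypothetical simulation, not Mesham code): each rank adds its id to the running total received from the previous rank, so the last process holds the sum of all ids.

```python
# Hypothetical sketch (plain Python, not Mesham): the running total each
# process of the prefix-sums pipeline above would hold, for P processes.
def pipeline_totals(P):
    totals, t = [], 0
    for rank in range(P):      # t arrives from rank-1 (0 for the first process)
        t = t + rank           # add this process's id and pass it on
        totals.append(t)
    return totals

print(pipeline_totals(5))      # last process holds 0+1+2+3+4 = 10
```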
[[Category: Tutorials|Dynamic Parallelism]]
c62bb74df26fbc2e2bda8577810b1cadb1500971
The Compiler
0
225
1258
1257
2013-07-26T11:32:44Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required; all will work with the generated code. Additionally our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles - firstly it is architecture specific (and versions exist for different flavours of Linux) as it contains any non-portable code which is needed and is also optimised for specific platforms. Secondly the runtime library contains functions which are often called and would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity the user can run their program with just one process, and the program will automatically spawn the number of processes required. Secondly the executable can be run with the exact number of processes needed, and this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each etc...)
Whilst earlier versions of the MPICH daemon allowed for the user to simply run their executable and the daemon would pick it up, ''Hydra'' which is the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable and the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself consists of a number of different phases. Firstly, your Mesham code goes through a preprocessor which expands the directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library, and their behaviour is called via an API from the core compiler into the appropriate types.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non human readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
* '''-vtl''' ''Display information about currently loaded type libraries''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select certain options such as the C compiler and location of dependencies. It is not necessarily required to set all of these - a subset will be fine if that is appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_COMPILER_ARGS''' ''Optional arguments to supply to the C compiler, for instance optimisation flags''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
* '''MESHAM_TYPE_EXTENSIONS''' ''The location of dynamic (.so) type libraries to load in. If not set then no extension type libraries will be loaded''
It is common to set these environment variables in the ''bashrc'' script, which usually lives in your home directory. To do so, add something like
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This would set these four variables to appropriate values; obviously change the values as required for your system.
== Executable options ==
Once compiled, the resulting executable provides a number of command line options which report details of the runtime environment that it will operate under.
* '''--mesham_p''' ''Displays the minimum number of processes required to run the code''
* '''--mesham_c''' ''Summary information about the communications layer, such as MPI, being used to link the processes''
* '''--mesham_v''' ''Displays version information about the runtime library and also the compiled executable''
39e17ba5973ccdb5b038794d2687e0134219dac9
1259
1258
2013-08-02T13:10:15Z
Polas
1
/* Executable options */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required; all will work with the generated code. Additionally our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles - firstly it is architecture specific (and versions exist for different flavours of Linux) as it contains any non-portable code which is needed and is also optimised for specific platforms. Secondly the runtime library contains functions which are often called and would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity the user can run their program with just one process, and the program will automatically spawn the number of processes required. Secondly the executable can be run with the exact number of processes needed, and this may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each etc...)
Whilst earlier versions of the MPICH daemon allowed for the user to simply run their executable and the daemon would pick it up, ''Hydra'' which is the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable and the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself consists of a number of different phases. Firstly, your Mesham code goes through a preprocessor which expands the directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library, and their behaviour is called via an API from the core compiler into the appropriate types.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non human readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler - although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
* '''-vtl''' ''Display information about currently loaded type libraries''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select certain options such as the C compiler and location of dependencies. It is not necessarily required to set all of these - a subset will be fine if that is appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_COMPILER_ARGS''' ''Optional arguments to supply to the C compiler, for instance optimisation flags''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
* '''MESHAM_TYPE_EXTENSIONS''' ''The location of dynamic (.so) type libraries to load in. If not set then no extension type libraries will be loaded''
It is common to set these environment variables in the ''bashrc'' script, which usually lives in your home directory. To do so, add something like
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This would set these four variables to appropriate values; obviously change the values as required for your system.
== Executable options ==
Once compiled, the resulting executable provides a number of command line options which report details of the runtime environment that it will operate under. These will only be checked if the executable is run with one process.
* '''--mesham_p''' ''Displays the minimum number of processes required to run the code''
* '''--mesham_c''' ''Summary information about the communications layer, such as MPI, being used to link the processes''
* '''--mesham_v''' ''Displays version information about the runtime library and also the compiled executable''
9cc86fccb500aeb02e242d6befa3b764814bcc5c
1260
1259
2014-01-02T19:01:56Z
Polas
1
/* Command line options */
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required; all will work with the generated code. Additionally our runtime library (known as Idaho) also needs to be linked in. The runtime library performs three roles - firstly it is architecture specific (and versions exist for different flavours of Linux) as it contains any non-portable code which is needed and is also optimised for specific platforms. Secondly the runtime library contains functions which are often called and would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable behaves like any normal executable and can be run in a number of ways. For simplicity the user can launch the program with a single process, and it will automatically spawn the number of processes required. Alternatively the executable can be run with the exact number of processes needed, for instance via a process file or a queue submission program. It should be noted that, as long as your MPI implementation supports multi-core machines (the majority do), the code will execute properly on one, with processes distributed over the cores (for instance 2 processes on 2 cores means 1 process on each core, 6 processes on 2 cores means 3 on each, and so on).
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and have the daemon pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will then spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself consists of a number of phases. Firstly, your Mesham code goes through a preprocessor which expands the directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code, if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library, and their behaviour is called via an API from the core compiler into the appropriate types.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the Mesham compiler, although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-g''' ''Produce executable that is debuggable with gdb and friends''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
* '''-vtl''' ''Display information about currently loaded type libraries''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select options such as the C compiler and the location of dependencies. It is not necessarily required to set all of these - a subset is fine if that is appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_COMPILER_ARGS''' ''Optional arguments to supply to the C compiler, for instance optimisation flags''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
* '''MESHAM_TYPE_EXTENSIONS''' ''The location of dynamic (.so) type libraries to load in. If not set then no extension type libraries will be loaded''
It is common to set these environment variables in the ''bashrc'' script, usually found in your home directory. For example:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets those four variables to example values; adjust the paths as required for your system.
== Executable options ==
Once compiled, the resulting executable provides a number of command line options which report details of the runtime environment it will operate under. These options are only checked when the executable is run with a single process.
* '''--mesham_p''' ''Displays the minimum number of processes required to run the code''
* '''--mesham_c''' ''Summary information about the communications layer, such as MPI, being used to link the processes''
* '''--mesham_v''' ''Displays version information about the runtime library and also the compiled executable''
e1bb073ab67ce984e4966a754e35cd809f0ebe80
Specification
0
177
978
977
2013-08-16T15:46:51Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Specification 1.0a_6|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham language specification|url=http://www.mesham.com|image=Spec.png|version=1.0a_6|released=August 2013}}
''The latest version of the Mesham language specification is 1.0a_6''
== Version 1.0a_6 - August 2013 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_6, is available for download. This version was released in August 2013 and is the base specification of the 1 series. It builds upon the previous 0.5 language by formalising aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language's types, with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality for the programmer.
Download [http://www.mesham.com/downloads/specification1a6.pdf this latest version here]
549852b7dc57e9343f767d658301d7d087d44fb3
NAS-IS Benchmark
0
144
805
804
2013-08-16T15:51:18Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes: class S with 65,000 numbers, class W with 1,000,000, class A with 8,000,000, class B with 33,000,000 and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest amount of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower level primitive communication types have been used, so it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done on a supercomputer cluster, comparing the Mesham code against the existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than the other examples, with files for each class of experiment. It is therefore not included on this page, but you can download it.
== Download ==
You can download the entire code package [http://www.mesham.com/downloads/npb.zip here]
[[Category:Example Codes]]
86104fa4293e94a5fc6d3f2ff66468a41991c6f1
806
805
2013-08-16T15:51:53Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes: class S with 65,000 numbers, class W with 1,000,000, class A with 8,000,000, class B with 33,000,000 and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest amount of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower level primitive communication types have been used, so it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done on a supercomputer cluster, comparing the Mesham code against the existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than the other examples, with files for each class of experiment. It is therefore not included on this page, but you can download it.
== Download ==
You can download the entire code package for version 1 of the compiler [http://www.mesham.com/downloads/npb.zip here] and for the previous 0.5 version [http://www.mesham.com/downloads/npb.tar.gz here]
[[Category:Example Codes]]
109281f1a9210fdc7e73b3f4afdb7022cba95b5b
807
806
2013-08-16T15:52:15Z
Polas
1
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes: class S with 65,000 numbers, class W with 1,000,000, class A with 8,000,000, class B with 33,000,000 and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest amount of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance; this does mean that some of the lower level primitive communication types have been used, so it is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were done on a supercomputer cluster, comparing the Mesham code against the existing NASA C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than the other examples, with files for each class of experiment. It is therefore not included on this page, but you can download it.
== Download ==
You can download the entire code package for the current version of the compiler [http://www.mesham.com/downloads/npb.zip here] and for the older 0.5 version [http://www.mesham.com/downloads/npb.tar.gz here]
[[Category:Example Codes]]
bd6d11c5a0f94b07a6c915928f8ac34d5d449811
Template:Examples
10
12
73
72
2013-08-16T15:54:35Z
Polas
1
wikitext
text/x-wiki
*Selected tutorials
**[[Tutorial - Hello world|Hello world]]
**[[Tutorial - Simple Types|Simple Types]]
**[[Tutorial - Functions|Functions]]
**[[Tutorial - Parallel Constructs|Parallel Constructs]]
**[[:Category:Tutorials|'''All tutorials''']]
*Selected codes
**[[Mandelbrot]]
**[[NAS-IS_Benchmark|NASA IS benchmark]]
**[[Image_processing|Image Processing]]
**[[Dartboard_PI|Dartboard method find PI]]
**[[:Category:Example Codes|'''All codes''']]
7c176074c644bfa475c4f660e42e3b707815293c
Idaho
0
233
1308
1307
2013-08-16T16:12:22Z
Polas
1
wikitext
text/x-wiki
<metadesc>Idaho is the Mesham runtime library</metadesc>
[[File:Runtimelibrary.png|right]]
== Introduction ==
Idaho is the name of the reengineered Mesham runtime library. We have always given parts of the language different nicknames, and [[Oubliette]] is the name of the reengineered compiler that requires Idaho. The runtime library is used by a compiled executable whilst it is running; apart from providing much of the lower level language functionality, such as memory allocation, remote memory (communication) management and timing, it also provides the native functions that much of the standard function library requires.
We have designed the system in this manner so that platform-specific behaviour can be contained within this library, and the intention is that a version of the library will exist for multiple platforms. Secondly, by modifying the library it is possible to tune how Mesham executables run, such as changing the garbage collection strategy.
== Abstracting communication ==
All physical parallelism, including communication and process placement, is handled by the lowest level communication layer in the RTL. By changing this layer we can support and optimise for multiple technologies. Implementations of this layer currently exist for process based (MPI) parallelism and thread based (pthreads) parallelism; the appropriate one is selected by downloading the corresponding runtime library files.
== API ==
The set of functions Idaho provides can be viewed in the ''mesham.h'' header file. We intend to release the source code once it is more mature.
9ff09577aa88bf9e5babbe53bdabb995eab90432
Operators
0
43
246
245
2013-12-20T13:04:34Z
Polas
1
/* Operators */
wikitext
text/x-wiki
== Operators ==
#+ Addition
#- Subtraction
#<nowiki>*</nowiki> Multiplication
#/ Division
#++ Pre or post fix addition
#-- Pre or post fix subtraction
#<< Bit shift to left
#>> Bit shift to right
#== Test for equality
#!= Test for inequality
#! Logical negation
#( ) Function call or expression parentheses
#[ ] Array element access
#. Member access
#< Test lvalue is smaller than rvalue
#> Test lvalue is greater than rvalue
#<= Test lvalue is smaller or equal to rvalue
#>= Test lvalue is greater or equal to rvalue
#?: Inline if operator
#|| Logical short circuit OR
#&& Logical short circuit AND
#| Logical OR
#& Logical AND
#+= Plus assignment
#-= Subtraction assignment
#<nowiki>*</nowiki>= Multiplication assignment
#/= Division assignment
#%= Modulus assignment
[[Category:Core Mesham]]
a259ab2da783ce5d91abe55f46ce697bbe03ee9f
Tutorial - Simple Types
0
219
1197
1196
2016-10-18T12:03:24Z
Polas
1
/* Let's go parallel */
wikitext
text/x-wiki
<metadesc>Mesham tutorial detailing an overview of how type oriented programming is used in the language</metadesc>
'''Tutorial number two''' - [[Tutorial_-_Hello world|prev]] :: [[Tutorial_-_Functions|next]]
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
#include <io>
#include <string>
function void main() {
var a:=78;
print(itostring(a)+"\n");
};
In the above code snippet we have included the appropriate system headers (for printing and integer to string conversion), specified our program entry point via the main function and declared variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as in the [[Tutorial - Hello world|Hello world]] tutorial, where variable ''p'' was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement.)
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type, to specify that it is allocated in the stack frame of the current function; the [[Onesided|onesided]] type, which determines that it uses one sided (variable sharing) communication; the [[Allocated|allocated]] type, which specifies that memory is allocated; and lastly the [[Multiple|multiple]] type, which specifies that the variable is allocated on all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence runs from right to left - the behaviour of types on the right overrides that of types to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example - we have simply specified explicitly the type of variable ''a'' to be the type chain that is inferred in the first example. As you can see, being able to omit explicit types in many cases saves typing. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing the empty ''[]'' braces is entirely optional.
All type chains must have at least one [[:Category:Element Types|element type]] contained within it. Convention has dictated that all [[:Category:Element Types|element types]] start with a capitalised first letter (such as [[Int]], [[Char]] and [[Bool]]) whereas all other types known as [[:Category:Compound Types|compound types]] start with a lower case first letter (such as [[Stack|stack]], [[Multiple|multiple]] and [[Allocated|allocated]].)
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we are involving two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, with the [[On|on]] type provided as a parameter and the value ''0'' supplied to that. This allocates variable ''a'' in the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this involves some form of shared memory communication to get the value across (as defined by the [[Onesided|onesided]] communication type which is used by default). Process 0, in the second [[Proc|proc]] block, reads out the value of variable ''a'' and displays it on standard output. A very important aspect of this code is the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, and all processes which need to will then write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes may read and write the same locations any number of times, although with writes there is no guarantee which value wins if they differ within the same step. You can also specify a variable name after the [[Sync|sync]], which means synchronise on that variable alone; if you omit it then it synchronises on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see how process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to zero.)
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync a;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who writes the value. Try modifying these values, but be warned: changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct creates all preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the values of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. We could just as easily have placed them inside the main function, but this illustrates that declaring variables in global scope, outside of a function body, is allowed.
== Changing the type ==
As the Mesham code runs we can change the type of a variable by modifying the chain, this is illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this - see an error at line 5? Don't worry, that is entirely expected. We type variable ''a'' to be an [[Int]] (with all the default types that go with it) and perform an assignment at line 3, which goes ahead fine. But at line 4 we modify the type of ''a'', via the set type operator '':'', to be the current type of ''a'' chained with the [[Const|const]] type, which forces the variable to be read only. Hence the assignment at line 5 fails, because the type of variable ''a'' now has the [[Const|const]] type in its chain. By removing this assignment, or the type modification at line 4, the code will compile fine.
Modifying types in this form can be very powerful, but there are some points to bear in mind. Firstly, it is not possible to modify the [[Allocated|allocated]] type or its contents: we are changing the behaviour of a variable, not if and where it is allocated in memory, and attempting to do so will result in an error. Secondly, modifying a type binds the modification to the local scope; once we leave this scope the type reverts to what it was before.
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile, because the programmer has specified that, just for the assignment at line 4, the [[Const|const]] type is appended to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
[[Category:Tutorials|Simple Types]]
c5a3fcc183a7ef3164a5f8efb811f089e1b9cc98
1198
1197
2016-10-18T12:04:15Z
Polas
1
/* Further parallelism */
wikitext
text/x-wiki
<metadesc>Mesham tutorial detailing an overview of how type oriented programming is used in the language</metadesc>
'''Tutorial number two''' - [[Tutorial_-_Hello world|prev]] :: [[Tutorial_-_Functions|next]]
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
#include <io>
#include <string>
function void main() {
var a:=78;
print(itostring(a)+"\n");
};
In the above code snippet we have included the appropriate system headers (for printing and integer to string conversion), specified our program entry point via the main function and declared variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, so that the type is deduced via inference; or by specifying neither and postponing the typing until later on (as in the [[Tutorial - Hello world|Hello world]] tutorial, where variable ''p'' was inferred to be an [[Int]] later on because it was used in a [[Par|par]] statement.)
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, a number of other default types are associated with an integer: the [[Stack|stack]] type, to specify that it is allocated in the stack frame of the current function; the [[Onesided|onesided]] type, which determines that it uses one sided (variable sharing) communication; the [[Allocated|allocated]] type, which specifies that memory is allocated; and lastly the [[Multiple|multiple]] type, which specifies that the variable is allocated on all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence runs from right to left - the behaviour of types on the right overrides that of types to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to its left.
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example - we have simply specified explicitly the type of variable ''a'' to be the type chain that is inferred in the first example. As you can see, being able to omit explicit types in many cases saves typing. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, providing the empty ''[]'' braces is entirely optional.
All type chains must contain at least one [[:Category:Element Types|element type]]. Convention dictates that all [[:Category:Element Types|element types]] start with a capitalised first letter (such as [[Int]], [[Char]] and [[Bool]]) whereas all other types, known as [[:Category:Compound Types|compound types]], start with a lower case first letter (such as [[Stack|stack]], [[Multiple|multiple]] and [[Allocated|allocated]].)
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we are involving two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, providing the [[On|on]] type as its parameter with the value ''0''. This allocates variable ''a'' to the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still added by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this will involve some form of shared memory communication to get that value across (as defined by the [[Onesided|onesided]] communication type, which is used by default.) Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is found on line 9: the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts as a barrier, at which all processes that need to will write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writes there is no guarantee which value will be used if they differ in the same step. Additionally, you can see how we have specified the variable name after the [[Sync|sync]] here; this means to synchronise on that variable alone - if you omit it then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see now process 0 reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to the zero value.)
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who does the writing. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes which do nothing, as the [[Proc|proc]] construct will create all preceding processes to honour the process ID; for instance, if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the value of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have given these variables global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables in global scope, outside of a function body, is allowed.
== Changing the type ==
As Mesham code runs we can change the type of a variable by modifying its chain; this is illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this - see an error at line 5? Don't worry, that was entirely expected. We type variable ''a'' as an [[Int]] (with all the default types that go with it) and the assignment at line 3 goes ahead fine, but at line 4 we modify the type of ''a'' via the set type operator '':'' to be the current type of ''a'' chained with the [[Const|const]] type, which forces the variable to be read only. Hence the assignment at line 5 fails, because the type of variable ''a'' now has the [[Const|const]] type in its chain. By removing this assignment, or the type modification at line 4, the code will compile fine.
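For reference, this is a sketch of the fix just described - with the final assignment removed, marking ''a'' read only is perfectly legal and the code compiles:
function void main() {
var a:Int;
a:=23;
a:a::const;
};
Equally, keeping the final ''a:=3'' but removing the ''a:a::const'' line would compile, since ''a'' then never becomes read only.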
Modifying types in this form can be very powerful but there are some points to bear in mind. Firstly, it is not possible to modify the [[Allocated|allocated]] type or its contents - we are changing the behaviour of a variable but not if and where it is allocated in memory - and attempting to do so will result in an error. Secondly, modifying a type binds the modification to the local scope; once we leave this scope the type reverts back to what it was before.
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile, because the programmer has specified that, just for the assignment at line 4, the [[Const|const]] type is appended to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
[[Category:Tutorials|Simple Types]]
c640afcc6976434738239cdc292c0a7cbb1dee5b
Tutorial - Shared Memory
0
222
1220
1219
2016-10-18T12:09:10Z
Polas
1
/* Further communication */
wikitext
text/x-wiki
<metadesc>Tutorial describing basic, shared remote memory, communication in Mesham</metadesc>
'''Tutorial number five''' - [[Tutorial_-_Parallel Constructs|prev]] :: [[Tutorial_-_Parallel Types|next]]
== Introduction ==
In this tutorial we will look at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality, and amounts to a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, passing through a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used, which can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated, which is itself driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the above code then you will see the output ''1'' - so let's have a look at what exactly is going on here. Variable ''a'' is allocated to all processes, all processes set the value to ''1'', process one then changes the value to ''99'', we do a barrier synchronisation on ''a'' and then process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value has no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment. Let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated to just one specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further: remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, whereas [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Now comment out the [[Sync|sync]] keyword entirely, recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
};
sync;
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) modifies ''a'' locally (as it is held there). We then [[Sync|synchronise]] all processes to ensure process zero has updated ''a''; process one then obtains the value of ''a'' from process zero and copies it into its own ''b'', completes all operations involving variable ''a'' and displays its value of ''b''. Stepping back a moment, what we are basically doing here is getting some remote data and copying it into a local variable; the result is that the value held by process zero in ''a'' will be retrieved into ''b'' on process one. If you remove the [[Sync|sync]] statement on line 10 then you might see that instead of the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to ensure process zero has updated ''a'' before process one reads from it; equally, the last synchronisation statement completes the RMA and if you remove it then most likely the value in ''b'' will not have been updated.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
proc 1 {
a:=2;
};
sync a;
group 0, 2 {
print(itostring(a)+"\n");
};
};
The same thing happens with [[Commgroup|communication groups]] too - compile and run the above code and you will see that process one has written the value ''2'' into the memory of variable ''a'', which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync a;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero; we then perform the assignment ''a:=b'' at line 12 which, because the variables are on the same process, is local and occurs immediately. Now change ''processOneAllocation'' to ''1'', uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. You see the same value - but now process 0 is writing the value of ''b'' into the remote memory of ''a'', and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same then access is local and if not then remote.
== Limits of communication ==
Currently all communication is based upon assignment; to illustrate this, look at the following code:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int;
proc 0 {
b:=a;
};
};
If we compile this then we get the error message ''Assignment must be visible to process 1''. This is because, as communication is assignment driven, process one (which holds ''a'') must drive this assignment and communication. To fix this you could change the [[Proc|proc]] block so that process one, rather than process zero, performs the assignment at line 8, which would enable this code to compile correctly. It is planned in the future to extend the compiler to support this pull (as well as push) remote memory mechanism.
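A sketch of that fix - letting process one, which holds ''a'', drive the assignment so that it is visible to the owning process:
#include &lt;io&gt;
#include &lt;string&gt;
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int;
proc 1 {
b:=a;
};
};
Here the read of ''a'' is local to process one, so no remote pull is required and the visibility rule is satisfied.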
[[Category:Tutorials|Shared Memory]]
f3a7ea429cf8cb129420d57a6486c2ede2cd78b1
1221
1220
2016-10-18T12:10:04Z
Polas
1
/* Single to single */
wikitext
text/x-wiki
<metadesc>Tutorial describing basic, shared remote memory, communication in Mesham</metadesc>
'''Tutorial number five''' - [[Tutorial_-_Parallel Constructs|prev]] :: [[Tutorial_-_Parallel Types|next]]
== Introduction ==
In this tutorial we will look at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality, and amounts to a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, passing through a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used, which can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated, which is itself driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the above code then you will see the output ''1'' - so let's have a look at what exactly is going on here. Variable ''a'' is allocated to all processes, all processes set the value to ''1'', process one then changes the value to ''99'', we do a barrier synchronisation on ''a'' and then process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value has no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment. Let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated to just one specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further: remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, whereas [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Now comment out the [[Sync|sync]] keyword entirely, recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
};
sync;
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) modifies ''a'' locally (as it is held there). We then [[Sync|synchronise]] all processes to ensure process zero has updated ''a''; process one then obtains the value of ''a'' from process zero and copies it into its own ''b'', completes all operations involving variable ''a'' and displays its value of ''b''. Stepping back a moment, what we are basically doing here is getting some remote data and copying it into a local variable; the result is that the value held by process zero in ''a'' will be retrieved into ''b'' on process one. If you remove the [[Sync|sync]] statement on line 10 then you might see that instead of the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to ensure process zero has updated ''a'' before process one reads from it; equally, the last synchronisation statement completes the RMA and if you remove it then most likely the value in ''b'' will not have been updated.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
proc 1 {
a:=2;
};
sync a;
group 0, 2 {
print(itostring(a)+"\n");
};
};
The same thing happens with [[Commgroup|communication groups]] too - compile and run the above code and you will see that process one has written the value ''2'' into the memory of variable ''a'', which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero; we then perform the assignment ''a:=b'' at line 12 which, because the variables are on the same process, is local and occurs immediately. Now change ''processOneAllocation'' to ''1'', uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. You see the same value - but now process 0 is writing the value of ''b'' into the remote memory of ''a'', and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same then access is local and if not then remote.
== Limits of communication ==
Currently all communication is based upon assignment; to illustrate this, look at the following code:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int;
proc 0 {
b:=a;
};
};
If we compile this then we get the error message ''Assignment must be visible to process 1''. This is because, as communication is assignment driven, process one (which holds ''a'') must drive this assignment and communication. To fix this you could change the [[Proc|proc]] block so that process one, rather than process zero, performs the assignment at line 8, which would enable this code to compile correctly. It is planned in the future to extend the compiler to support this pull (as well as push) remote memory mechanism.
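A sketch of that fix - letting process one, which holds ''a'', drive the assignment so that it is visible to the owning process:
#include &lt;io&gt;
#include &lt;string&gt;
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int;
proc 1 {
b:=a;
};
};
Here the read of ''a'' is local to process one, so no remote pull is required and the visibility rule is satisfied.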
[[Category:Tutorials|Shared Memory]]
fe42496d651a9f466abb9468cc249d1ba2c64f50
1222
1221
2016-10-18T12:33:55Z
Polas
1
/* Limits of communication */
wikitext
text/x-wiki
<metadesc>Tutorial describing basic, shared remote memory, communication in Mesham</metadesc>
'''Tutorial number five''' - [[Tutorial_-_Parallel Constructs|prev]] :: [[Tutorial_-_Parallel Types|next]]
== Introduction ==
In this tutorial we will look at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality, and amounts to a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, passing through a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used, which can be thought of as barrier synchronisation.
== My first communication ==
Communication depends on exactly where variables are allocated, which is itself driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the above code then you will see the output ''1'' - so let's have a look at what exactly is going on here. Variable ''a'' is allocated to all processes, all processes set the value to ''1'', process one then changes the value to ''99'', we do a barrier synchronisation on ''a'' and then process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value has no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment. Let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated to just one specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further: remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, whereas [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Now comment out the [[Sync|sync]] keyword entirely, recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
};
sync;
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) modifies ''a'' locally (as it is held there). We then [[Sync|synchronise]] all processes to ensure process zero has updated ''a''; process one then obtains the value of ''a'' from process zero and copies it into its own ''b'', completes all operations involving variable ''a'' and displays its value of ''b''. Stepping back a moment, what we are basically doing here is getting some remote data and copying it into a local variable; the result is that the value held by process zero in ''a'' will be retrieved into ''b'' on process one. If you remove the [[Sync|sync]] statement on line 10 then you might see that instead of the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to ensure process zero has updated ''a'' before process one reads from it; equally, the last synchronisation statement completes the RMA and if you remove it then most likely the value in ''b'' will not have been updated.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
proc 1 {
a:=2;
};
sync a;
group 0, 2 {
print(itostring(a)+"\n");
};
};
The same thing happens with [[Commgroup|communication groups]] too - compile and run the above code and you will see that process one has written the value ''2'' into the memory of variable ''a'', which is held on processes zero and two.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero; we then perform the assignment ''a:=b'' at line 12 which, because the variables are on the same process, is local and occurs immediately. Now change ''processOneAllocation'' to ''1'', uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. You see the same value - but now process 0 is writing the value of ''b'' into the remote memory of ''a'', and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same then access is local and if not then remote.
== Limits of communication ==
Currently all variables declared multiple (including communication groups) should be considered private; only variables declared single can be accessed by another process.
[[Category:Tutorials|Shared Memory]]
8cb8fe6a6642154899165322b20bfb5206c536c8
1223
1222
2016-10-18T12:41:45Z
Polas
1
/* Further communication */
wikitext
text/x-wiki
<metadesc>Tutorial describing basic, shared remote memory, communication in Mesham</metadesc>
'''Tutorial number five''' - [[Tutorial_-_Parallel Constructs|prev]] :: [[Tutorial_-_Parallel Types|next]]
== Introduction ==
In this tutorial we will look at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This sounds much more formidable than it is in reality, and amounts to a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, passing through a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used, which can be thought of as barrier synchronisation.
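As an illustrative sketch of these states (based on the examples later in this tutorial; the process numbers and values are arbitrary), each [[Sync|sync]] below marks the boundary between one intermediate state of ''a'' and the next - remote writes made before a [[Sync|sync]] are only guaranteed visible once that synchronisation completes:
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=1;
};
sync a;
proc 1 {
a:=2;
};
sync a;
};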
== My first communication ==
Communication depends on exactly where variables are allocated, which is itself driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the above code then you will see the output ''1'' - so let's have a look at what exactly is going on here. Variable ''a'' is allocated to all processes, all processes set the value to ''1'', process one then changes the value to ''99'', we do a barrier synchronisation on ''a'' and then process zero displays its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value has no impact on ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment. Let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated to just one specific process and another one reads/writes to it, then this will involve remote access to that memory (communication.) Let's experiment further: remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, whereas [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Now comment out the [[Sync|sync]] keyword entirely, recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
};
sync;
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) modifies ''a'' locally (as it is held there). We then [[Sync|synchronise]] all processes to ensure process zero has updated ''a''; process one then obtains the value of ''a'' from process zero and copies it into its own ''b'', completes all operations involving variable ''a'' and displays its value of ''b''. Stepping back a moment, what we are basically doing here is getting some remote data and copying it into a local variable; the result is that the value held by process zero in ''a'' will be retrieved into ''b'' on process one. If you remove the [[Sync|sync]] statement on line 10 then you might see that instead of the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value.) This is because synchronisation must occur to ensure process zero has updated ''a'' before process one reads from it; equally, the last synchronisation statement completes the RMA and if you remove it then most likely the value in ''b'' will not have been updated.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
var b:Int::allocated[single[on[1]]];
group 0, 2 {
a:=2;
b:=a;
};
sync;
proc 1 {
print(itostring(b)+"\n");
};
};
The above illustrates a [[Commgroup|communication group]]; as this has to be provided within [[Multiple]], the variable ''a'' is private to each process that it is allocated on. Here processes zero and two update their own (local) version of ''a'' and then remotely write to variable ''b'' held on process one. Both processes will send values over, but as these values are the same there is no conflict. [[Sync|Synchronisation]] is used to complete the RMA and ensure process one awaits updates to its ''b'', which it then displays.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero, and we are then performing the assignment ''a:=b'' at line 12 which, because the variables are on the same process, is local and occurs immediately. Now change ''processOneAllocation'' to be equal to ''1'', uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. You see the same value - but now process zero is writing the value of ''b'' into the remote memory of ''a'', and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same then the assignment is local, and if not then it is remote.
== Limits of communication ==
Currently all variables declared multiple (including communication groups) should be considered private; it is only variables declared single which can be accessed by another process.
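To illustrate this rule, here is a minimal sketch (assembled from the allocation types used in the examples above). The variable ''b'' is multiple and hence private: process zero's write to its own copy of ''b'' is not visible to process one.
#include <io>
#include <string>
function void main() {
var b:Int::allocated[multiple[]];
proc 0 {
b:=7;
};
sync;
proc 1 {
print(itostring(b)+"\n");
};
};
Process one displays the default [[Int]] initialisation value ''0'', not ''7'', since each process holds its own private ''b''. To actually share the value, a variable declared single would be needed, as in the earlier examples.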
[[Category:Tutorials|Shared Memory]]
f6e81b749670b86f85f99bb159b073f3df2d7db7
Tutorial - Arrays
0
223
1234
1233
2016-10-18T12:12:25Z
Polas
1
/* Communication of arrays */
wikitext
text/x-wiki
<metadesc>Tutorial describing collecting data together via arrays in Mesham</metadesc>
'''Tutorial number seven''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
An [[Array|array]] is a collection of element data in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall have a look at how to create, use and communicate arrays.
== Simple arrays ==
function void main() {
var a:array[Int,10];
};
The above code will declare variable ''a'' to be an [[Array|array]] of ten [[Int|Ints]] which are indexed 0 to 9 inclusive. In the absence of further information a set of default types will be applied, which are: [[Heap|heap]], [[Onesided|onesided]], [[Row|row]], [[Allocated|allocated]] and [[Multiple|multiple]]. Arrays, when allocated to the heap, are subject to garbage collection which will remove them when no longer used.
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
var i;
for i from 0 to 9 {
a[i]:=i;
};
for i from 0 to 9 {
print(itostring(a[i]));
};
};
The code snippet demonstrates writing to and reading from elements of an array; if you compile and run this code then you will see it displays the values ''0'' to ''9'' on standard output. We can access an element of an array (for reading or writing) via the ''[x]'' syntax, where ''x'' is either an [[Int]] constant or variable.
=== Arrays and functions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
fill(a);
display(a);
};
function void fill(var a:array[Int,10]) {
var i;
for i from 0 to 9 {
a[i]:=i;
};
};
function void display(var a:array[Int]) {
var i;
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code demonstrates passing arrays into functions and there are a couple of noteworthy points to make here. First, because an [[Array|array]] is, by default, allocated to the [[Heap|heap]], as discussed in the [[Tutorial - Functions|functions tutorial]], this is pass by reference. Hence modifications made in the ''fill'' function do affect the original data allocated in the ''main'' function, which is what we want here. Secondly, see that the type we provide to the ''display'' function does not have any explicit size associated with the array? It is not always possible to know the size of an array that is being passed into a function, so Mesham allows the type of a function argument to be specified without a size, but with two restrictions: first, it must be a one dimensional array and secondly, no compile time bounds checking can take place.
=== Multi dimensional arrays ===
Arrays can be any number of dimensions just by adding extra bounds into the type declaration:
function void main() {
var a:array[Int,16,8];
a[0][1]:=23;
};
This code illustrates declaring variable ''a'' to be an [[Array|array]] of two dimensions; the first of size 16 and the second 8. By default all allocation of arrays is [[Row|row major]] although this can be overridden. Line three illustrates writing into an element of a two dimensional array.
== Communication of arrays ==
Arrays can be communicated entirely, per dimension or by individual elements.
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
a[0][1]:=28;
};
sync;
proc 1 {
print(itostring(a[0][1])+"\n");
};
};
In this example process 0 writes to the (remote) memory of process 1, which contains the array; synchronisation occurs and then the value is displayed by process 1 on standard output.
=== Communicating multiple dimensions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 7 {
a[2][i]:=i;
};
};
sync;
proc 1 {
var i;
for i from 0 to 7 {
print(itostring(a[2][i])+"\n");
};
};
};
Compile and run this code and look at the output - is it just a list of the value ''8'', not what you expected? In this example the values copied across may be any number between 0 and 8, because at each assignment ''a[2][i]:=i;'' we are setting the remote value of ''a'' at this specific index to be the value held in ''i''. However, this communication is not guaranteed to complete until the [[Sync|synchronisation]], and at that point the value of ''i'' is ''8'' (the loop iterates up to and including 7, after which ''i'' is incremented, found to be too large, and the loop ceases). It is something to be aware of - the value of a variable being remotely written ''matters'' up until the corresponding synchronisation.
There are a number of ways in which we could change this code to make it do what we want; the easiest is to use a temporary variable allocated on the heap (which will be garbage collected after the synchronisation). To do this, replace the ''proc 0'' block with:
proc 0 {
var i;
for i from 0 to 7 {
var m:Int::heap;
m:=i;
a[2][i]:=m;
};
};
This is an example of writing into remote memory of a process and modifying multiple indexes of an array (in any dimension.)
=== Communicating entire arrays ===
#include <io>
#include <string>
function void main() {
var a:array[Int,20]::allocated[single[on[1]]];
var b:array[Int,20]::allocated[single[on[2]]];
proc 1 {
var i;
for i from 0 to 19 {
a[i]:=1;
};
};
b:=a;
sync;
proc 2 {
var i;
for i from 0 to 19 {
print(itostring(b[i])+"\n");
};
};
};
This code example demonstrates populating an array held on one process, assigning it in its entirety to an array on another process (line 13), synchronising and then the other process reading out all elements of that target array which has just been remotely written to.
== Row and column major ==
By default arrays are row major allocated using the [[Row|row]] type. This can be overridden to column major via the [[Col|col]] type.
function void main() {
var a:array[Int,16,8]::allocated[col::multiple];
};
This will allocate array ''a'' to be an [[Int]] array of 16 by 8, allocated to all processes using column major memory allocation.
For something more interesting let's have a look at the following code:
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8];
var i;
var j;
for i from 0 to 15 {
for j from 0 to 7 {
a[i][j]:=(i*10) + j;
};
};
print(itostring(a::col[][14][7]));
};
By default variable ''a'' is [[Row|row major]] allocated and we are filling up the array in this fashion. However, in the [[Print|print]] statement we are accessing the indexes of this array in a [[Col|column major]] fashion. Try changing [[Col|col]] to [[Row|row]] or removing it altogether to see the difference in value. Behind the scenes the types are doing the appropriate memory lookup based upon their meaning and the indexes provided. Mixing memory allocation in this manner can be very useful for array transposition amongst other things. ''Exercise:'' Experiment with the [[Col|col]] and [[Row|row]] types and also see what effect placing them in the type chain of ''a'' has, like in the previous example.
[[Category: Tutorials|Arrays]]
7104008125aeaa63712bf2ebf3ab8d69670c7bcf
Template:ElementTypeCommunication
10
46
260
259
2016-10-18T12:50:42Z
Polas
1
wikitext
text/x-wiki
When a variable is assigned to another, depending on where each variable is allocated to, there may be communication required to achieve this assignment. The table below details the communication rules in the assignment ''assigned variable := assigning variable''. If the communication is issued from the MPMD programming style then this will be one sided. The default communication listed here is guaranteed to be safe, which may result in a small performance hit.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| individual processes write values to process i
|-
| multiple[]
| single[on[i]]
| individual processes read values from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
==== Communication Example ====
var a:Int;
var b:Int :: allocated[single[on[2]]];
var p;
par p from 0 to 3 {
if (p==2) b:=p;
a:=b;
sync;
};
This code will result in each process reading the value of ''b'' from process 2 and then writing this into ''a''. As already noted, in the absence of allocation information the default of allocating to all processes is used. In this example the variable ''a'' can be assumed to additionally have the type ''allocated[multiple]''.
57e0c928a832e34d717ff78b4e430e51af45c747
261
260
2016-10-18T12:51:17Z
Polas
1
/* Communication Example */
wikitext
text/x-wiki
When a variable is assigned to another, depending on where each variable is allocated to, there may be communication required to achieve this assignment. The table below details the communication rules in the assignment ''assigned variable := assigning variable''. If the communication is issued from the MPMD programming style then this will be one sided. The default communication listed here is guaranteed to be safe, which may result in a small performance hit.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| individual processes write values to process i
|-
| multiple[]
| single[on[i]]
| individual processes read values from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
==== Communication Example ====
var a:Int;
var b:Int :: allocated[single[on[2]]];
var p;
par p from 0 to 3 {
if (p==2) b:=p;
a:=b;
sync;
};
This code will result in each process reading the value of ''b'' from process 2 and then writing this into ''a''. As already noted, in the absence of allocation information the default of allocating to all processes is used. In this example the variable ''a'' can be assumed to additionally have the type ''allocated[multiple]''. Note that communication groups are the same as multiple in this context and share the same semantics. All variables marked multiple are private to their containing process.
8e16a709a2e9cca763c10e3199f020e2ec9d2bda
Array
0
71
389
388
2016-10-18T12:52:24Z
Polas
1
/* Communication */
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element or record type, followed by the dimensions. The programmer can provide any number of dimensions to create an n dimension array. The default is row major allocation (although this can be overridden via types). In order to access an element of an array, the programmer uses the traditional ''name[index]'' syntax.<br><br>
''Note:'' If the dimensions are omitted then it is assumed to be a one dimensional array of infinite size without any explicit memory allocation (i.e. data provided into a function). Be aware that without any size information it is not possible to bounds check indexes.
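For instance, a sketch mirroring the [[Tutorial - Arrays|arrays tutorial]]: this function accepts a one dimensional [[Int]] array of any size, though the caller must ensure at least ten elements exist, as no bounds checking is possible.
#include <io>
#include <string>
function void display(var a:array[Int]) {
var i;
for i from 0 to 9 {
print(itostring(a[i]));
};
};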
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[heap]]
* [[onesided]]
== Communication ==
When an array variable is assigned to another, depending on where each variable is allocated to, there may be communication required to achieve this assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. As with the element type, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| individual processes write values to process i
|-
| multiple[]
| single[on[i]]
| individual processes read values from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
#include <io>
#include <string>
function void main() {
var a:array[String,2];
a[0]:="Hello";
a[1]:="World";
print(a[0]+" "+a[1]+"\n");
};
This example will declare variable ''a'' to be an array of 2 Strings. Then the first location in the array will be set to ''Hello'' and the second location set to ''World''. Lastly the code will display both these array string locations on standard output, followed by a newline.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
254a5d47d7945fa88840a4d053a413f81238e9ac
Commgroup
0
64
347
346
2016-10-18T12:53:04Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
commgroup[process list]
== Semantics ==
Specified within the multiple type, this will limit memory allocation (and variable communication) to the processes within the list given in this type's arguments. This type will ensure that the communication group's processes exist. All variables marked in this way are private to their local processes.
== Example ==
function void main() {
var i:Int :: allocated[multiple[commgroup[1,3]]];
};
In this example there are a number of processes, but only processes one and three have variable ''i'' allocated to them. This type will also have ensured that processes zero and two exist, so that there can be a process three.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
42b4ba047e27696deecdee70c89e2b28bd85583e
Multiple
0
63
340
339
2016-10-18T12:53:40Z
Polas
1
/* Semantics */
wikitext
text/x-wiki
== Syntax ==
multiple[type]
Where ''type'' is optional
== Semantics ==
Included in allocated, this will (with no arguments) set the specific variable to have memory allocated to all processes within the current scope. This sets the variable to be private to its allocated process (i.e. no other processes can view it).
== Example ==
function void main() {
var i: Int :: allocated[multiple[]];
};
In this example the variable ''i'' is an integer, allocated to all processes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
bd9759d747a54e8e9bfb964a0ddf3d4a0e430ba0
LineNumber
0
250
1356
2016-10-18T14:54:00Z
Polas
1
Created page with '== Syntax == <nowiki>_LINE_NUMBER [sourcefile] == Semantics == Will be substituted in source code by the current line number of that specific file, this is useful for debuggin…'
wikitext
text/x-wiki
== Syntax ==
_LINE_NUMBER [sourcefile]
== Semantics ==
Will be substituted in source code by the current line number of that specific file; this is useful for debugging and error messages.
''Since: Version 1.0''
[[Category:preprocessor]]
72a2ae2d7d4a3608f79b4c04c0cbd84ca8b14649
1357
1356
2016-10-18T14:54:21Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
_LINE_NUMBER [sourcefile]
== Semantics ==
Will be substituted in source code by the current line number of that specific file; this is useful for debugging and error messages.
''Since: Version 1.0''
[[Category:preprocessor]]
e4b7f020fba9a2cb493edcbd0e836621d63520e4
LineNumber
0
250
1358
1357
2016-10-18T14:55:01Z
Polas
1
/* Syntax */
wikitext
text/x-wiki
== Syntax ==
_LINE_NUMBER
== Semantics ==
Will be substituted in source code by the current line number of that specific file; this is useful for debugging and error messages.
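== Example ==
A sketch (assuming _LINE_NUMBER substitutes to an [[Int]] usable with [[Itostring|itostring]]):
#include <io>
#include <string>
function void main() {
print("Reached line "+itostring(_LINE_NUMBER)+"\n");
};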
''Since: Version 1.0''
[[Category:preprocessor]]
ddae2dc85adeebb3128be23b2ffed8bfce3aa1d0
SourceFile
0
251
1360
2016-10-18T14:55:21Z
Polas
1
Created page with '== Syntax == _SOURCE_FILE == Semantics == Will be substituted in source code by the name of the current source code file, this is useful for debugging and error messages ''Si…'
wikitext
text/x-wiki
== Syntax ==
_SOURCE_FILE
== Semantics ==
Will be substituted in source code by the name of the current source code file; this is useful for debugging and error messages.
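== Example ==
A sketch (assuming _SOURCE_FILE substitutes to a string usable in concatenation):
#include <io>
function void main() {
print("In file "+_SOURCE_FILE+"\n");
};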
''Since: Version 1.0''
[[Category:preprocessor]]
795da35dff7714c5b22888b0e2511335684f94d1
Tutorial - RMA
0
252
1362
2016-10-19T11:10:58Z
Polas
1
Created page with '<metadesc>Tutorial describing RMA of data in Mesham</metadesc> '''Tutorial number eight''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]] == Int…'
wikitext
text/x-wiki
<metadesc>Tutorial describing RMA of data in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
The default behaviour in Mesham is for communication involving variables to be performed via Remote Memory Access (RMA). This is one sided, where data is remotely retrieved or written to a target process by the source. We briefly looked at this in the [[Tutorial_-_Shared_Memory|shared memory tutorial]] and here we build on that to consider the concepts in more depth.
== Data visibility ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[commgroup[0,1]]];
var d:Int::allocated[single[on[0]]];
b:=a;
proc 1 {
c:=a;
};
d:=a;
proc 1 {
d:=a;
};
};
In the code snippet above, exactly what communications are occurring (i.e. are processes reading remote data or writing to remote data)? The best way to think about this is via a simple visibility rule: all variables marked multiple (including those with the extra commgroup type) are private to the processes that contain them, and all variables marked single are publicly visible to all processes. Therefore in the assignment at line 6 each process will remotely read from ''a'' held on process one and write this into its own local (private) copy of ''b''. At line 8, only process one will write the value of ''a'' (a local copy, as ''a'' is held on the same process) into its own local (private) version of ''c''; the value of ''c'' on process zero will remain unchanged. For variables marked single, assignment favours reading the value remotely if possible rather than writing remotely: for instance at line 10 the assignment ''d:=a'' will result in process zero reading the value of ''a'' from process one, but at line 12 the only process that can execute this is process one, so this results in a remote write of ''a'' to variable ''d'' held on process zero.
== Synchronisation ==
By default RMA is non-blocking, so remote reads or writes might complete at any point and need to be synchronised before values are available. This approach is adopted for performance and scalability, such that many reads and/or writes can occur between synchronisation points. The [[Sync|sync]] keyword provides synchronisation in Mesham and there are two ways to use it. Firstly, ''sync'' on its own will result in a barrier synchronisation, where each process will complete all of its outstanding RMA and then wait (barrier) for all other processes to reach that same point. The other use of synchronisation is with a variable, for instance ''sync v'' (assuming variable ''v'' already exists), which will ensure all outstanding RMA involving only variable ''v'' completes - this second use of synchronisation does not involve any form of barrier so is far more efficient. It is fine to synchronise on a variable which has no outstanding RMA communications, in which case the processes will continue immediately.
Completion of outstanding RMA means that all communications have fully completed, i.e. remote writes have completed and the data is visible on the target process.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync b;
};
The code snippet above raises a question: given the assignment ''b:=a'' (which involves RMA), if the programmer wished to synchronise the RMA for this assignment, should they issue ''sync b'' or ''sync a''? The simple answer is that it doesn't matter, as for synchronisation an assignment will tie the variables together so that, for instance, ''sync b'' will synchronise RMA for variable ''b'', RMA for variable ''a'' and any other tied RMA for both these variables and their own tied variables.
== Bulk Synchronous RMA ==
Many of the RMA examples we have seen in these tutorials follow a bulk synchronous approach (similar to fences), where all processes will synchronise, then communicate and then synchronise again before continuing with computation.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
proc 1 {
a:=55;
};
sync;
b:=a;
sync;
proc 1 {
a:=15;
};
};
Because RMA communication is non-blocking and may complete at any point from issuing the communication up until the synchronisation, in the example here we need two [[Sync|sync]] calls. The first ensures that process zero doesn't race ahead and issue the remote read before process one has written the value ''55'' into variable ''a''. The second synchronisation call ensures that process one doesn't then rush ahead and overwrite the value of ''a'' with ''15'' until process zero has finished remotely reading it. If this last assignment (''a:=15'') did not exist then the last synchronisation could be weakened to ''sync b'' (or ''sync a''), which would complete the RMA on process zero at that point, and process one would be free to rush ahead.
== Notify and wait ==
The bulk synchronous approach is simple but not very scalable; certainly it is possible to play with different synchronisation options (for instance putting them inside the [[Proc|process selection]] blocks) but care must be taken for data consistency. Another approach is to use the [[Notify|notify]] and [[Wait|wait]] support of the parallel function library.
[[Category: Tutorials]]
4c199b589d379ddf6ae77f78f1c72d8bf764d519
1363
1362
2016-10-19T11:34:30Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing RMA of data in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
The default behaviour in Mesham is for communication involving variables to be performed via Remote Memory Access (RMA). This is one sided, where data is remotely retrieved or written to a target process by the source. We briefly looked at this in the [[Tutorial_-_Shared_Memory|shared memory tutorial]] and here we build on that to consider the concepts in more depth.
== Data visibility ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[commgroup[0,1]]];
var d:Int::allocated[single[on[0]]];
b:=a;
proc 1 {
c:=a;
};
d:=a;
proc 1 {
d:=a;
};
};
In the code snippet above, exactly what communications are occurring (i.e. are processes reading remote data or writing to remote data)? The best way to think about this is via a simple visibility rule: all variables marked multiple (including those with the extra commgroup type) are private to the processes that contain them, and all variables marked single are publicly visible to all processes. Therefore in the assignment at line 6 each process will remotely read from ''a'' held on process one and write this into its own local (private) copy of ''b''. At line 8, only process one will write the value of ''a'' (a local copy, as ''a'' is held on the same process) into its own local (private) version of ''c''; the value of ''c'' on process zero will remain unchanged. For variables marked single, assignment favours reading the value remotely if possible rather than writing remotely: for instance at line 10 the assignment ''d:=a'' will result in process zero reading the value of ''a'' from process one, but at line 12 the only process that can execute this is process one, so this results in a remote write of ''a'' to variable ''d'' held on process zero.
== Synchronisation ==
By default RMA is non-blocking, so remote reads or writes might complete at any point and need to be synchronised before values are available. This approach is adopted for performance and scalability, such that many reads and/or writes can occur between synchronisation points. The [[Sync|sync]] keyword provides synchronisation in Mesham and there are two ways to use it. Firstly, ''sync'' on its own will result in a barrier synchronisation, where each process will complete all of its outstanding RMA and then wait (barrier) for all other processes to reach that same point. The other use of synchronisation is with a variable, for instance ''sync v'' (assuming variable ''v'' already exists), which will ensure all outstanding RMA involving only variable ''v'' completes - this second use of synchronisation does not involve any form of barrier so is far more efficient. It is fine to synchronise on a variable which has no outstanding RMA communications, in which case the processes will continue immediately.
Completion of outstanding RMA means that all communications have fully completed, i.e. remote writes have completed and the data is visible on the target process.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync b;
};
The code snippet above raises a question: given the assignment ''b:=a'' (which involves RMA), if the programmer wished to synchronise the RMA for this assignment, should they issue ''sync b'' or ''sync a''? The simple answer is that it doesn't matter, as for synchronisation an assignment will tie the variables together so that, for instance, ''sync b'' will synchronise RMA for variable ''b'', RMA for variable ''a'' and any other tied RMA for both these variables and their own tied variables.
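Because of this tying, the following sketch is equivalent to the snippet above, synchronising on ''a'' instead of ''b'':
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync a;
};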
== Bulk Synchronous RMA ==
Many of the RMA examples we have seen in these tutorials follow a bulk synchronous approach (similar to fences), where all processes will synchronise, then communicate and then synchronise again before continuing with computation.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
proc 1 {
a:=55;
};
sync;
b:=a;
sync;
proc 1 {
a:=15;
};
};
Because RMA communication is non-blocking and may complete at any point from issuing the communication up until the synchronisation, in the example here we need two [[Sync|sync]] calls. The first ensures that process zero doesn't race ahead and issue the remote read before process one has written the value ''55'' into variable ''a''. The second synchronisation call ensures that process one doesn't then rush ahead and overwrite the value of ''a'' with ''15'' until process zero has finished remotely reading it. If this last assignment (''a:=15'') did not exist then the last synchronisation could be weakened to ''sync b'' (or ''sync a''), which would complete the RMA on process zero at that point, and process one would be free to rush ahead.
== Notify and wait ==
The bulk synchronous approach is simple but not very scalable; certainly it is possible to play with different synchronisation options (for instance putting them inside the [[Proc|process selection]] blocks) but care must be taken for data consistency. Another approach is to use the [[Notify|notify]] and [[Wait|wait]] support of the parallel function library. The [[Notify|notify]] function will send a notification to a specific process and the [[Wait|wait]] function will block and wait for a notification from a specific process.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notify(1);
};
proc 1 {
wait(0);
var i;
for i from 0 to 9 {
print(itostring(j[i])+"\n");
};
};
};
In the example here process zero will issue a remote write to variable ''j'' (held on process one), then synchronise (complete) this RMA before sending a notification to process one. Process one will block waiting for a notification from process zero and, once it has received one, will display its local values of ''j''. Due to the notification and waiting these values will be those written by process zero; if you comment out the [[Wait|wait]] call then process one will just display zeros.
There are some variations of these calls - [[Notifyall|notifyall]] to notify all processes, [[Waitany|waitany]] to wait for a notification from any process and [[Test_notification|test_notification]] to test whether there is a notification from a specific process or not.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[2]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notifyall();
};
proc 1 {
var m:array[Int,10];
var p:=waitany();
m:=j;
sync m;
var i;
for i from 0 to 9 {
print(itostring(m[i])+" written by process "+itostring(p)+"\n");
};
};
proc 2 {
while (!test_notification(0)) { };
var i;
for i from 0 to 9 {
print("Local value is "+itostring(j[i])+"\n");
};
};
};
This example extends the previous one; here ''j'' is held on process two, and process zero remotely writes to it and then issues [[Notifyall|notifyall]] to send a notification to every other process. These other two processes could have used the [[Wait|wait]] call as per the previous example, but instead process one waits on a notification from any process (which returns the ID of the process that issued the notification, which is then displayed) and process two tests for a notification, looping whilst this returns false.
[[Category: Tutorials]]
cd92aa70248e5cb167079c617db3e5755961b8ec
1364
1363
2016-10-19T11:35:02Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing RMA of data in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
The default behaviour in Mesham is for communication involving variables to be performed via Remote Memory Access (RMA). This is one sided: data is remotely retrieved from, or written to, a target process by the source. We briefly looked at this in the [[Tutorial_-_Shared_Memory|shared memory tutorial]] and here we build on that to consider the concepts in more depth.
== Data visibility ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[commgroup[0,1]]];
var d:Int::allocated[single[on[0]]];
b:=a;
proc 1 {
c:=a;
};
d:=a;
proc 1 {
d:=a;
};
};
In the code snippet above, exactly what communications are occurring (i.e. are processes reading remote data or writing to remote data)? The best way to think about this is via a simple visibility rule: all variables marked multiple (including those with the extra commgroup type) are private to the processes that contain them, and all variables marked single are publicly visible to all processes. Therefore in the assignment at line 6 each process will remotely read from ''a'' held on process one and write this into its own local (private) copy of ''b''. At line 8 only process one will write the value of ''a'' (a local copy, as ''a'' is held on the same process) into its own local (private) version of ''c''; the value of ''c'' on process zero will remain unchanged. For variables marked single, assignment favours reading the value remotely rather than writing remotely where possible: at line 10 the assignment ''d:=a'' will result in process zero reading the value of ''a'' from process one, but at line 12 the only process that can execute the assignment is process one, so this results in a remote write of ''a'' to variable ''d'' held on process zero.
== Synchronisation ==
By default RMA is non-blocking, so remote reads or writes might complete at any point and must be synchronised before values are available. This approach is adopted for performance and scalability, such that many reads and/or writes can occur between synchronisation points. The [[Sync|sync]] keyword provides synchronisation in Mesham and there are two ways to use it. Firstly, ''sync'' on its own will result in a barrier synchronisation, where each process will complete all of its outstanding RMA and then wait (barrier) for all other processes to reach that same point. The other use is with a variable, for instance ''sync v'' (assuming variable ''v'' already exists), which ensures that all outstanding RMA involving only variable ''v'' completes - this second form does not involve any form of barrier so is far more efficient. It is fine to synchronise on a variable which has no outstanding RMA communications, in which case the processes will continue immediately.
Completion of outstanding RMA means that all communications have fully completed, i.e. remote writes have completed and the data is visible on the target process.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync b;
};
The code snippet above illustrates a potential question: based on the assignment ''b:=a'' (which involves RMA), if the programmer wishes to synchronise the RMA for this assignment, should they issue ''sync b'' or ''sync a''? The simple answer is that it doesn't matter, as for synchronisation purposes an assignment ties the variables together; for instance ''sync b'' will synchronise RMA for variable ''b'', RMA for variable ''a'' and any other tied RMA for both these variables and their own tied variables.
== Bulk Synchronous RMA ==
Many of the RMA examples we have seen in these tutorials follow a bulk synchronous approach (similar to fences), where all processes will synchronise, then communicate and then synchronise again before continuing with computation.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
proc 1 {
a:=55;
};
sync;
b:=a;
sync;
proc 1 {
a:=15;
};
};
Because RMA communication is non-blocking and may complete at any point from issuing the communication up until the synchronisation, in the example here we need two [[Sync|sync]] calls. The first ensures that process zero doesn't race ahead and issue the remote read before process one has written the value of ''55'' into variable ''a''. The second ensures that process one doesn't then rush ahead and overwrite the value of ''a'' with ''15'' before process zero has finished remotely reading it. If this last assignment (''a:=15'') did not exist then the final synchronisation could be weakened to ''sync b'' (or ''sync a''), which would complete the RMA on process zero at that point and leave process one free to rush ahead.
== Notify and wait ==
The bulk synchronous approach is simple but not very scalable; certainly it is possible to play with different synchronisation options (for instance putting them inside the [[Proc|process selection]] blocks) but care must be taken to maintain data consistency. Another approach is to use the [[Notify|notify]] and [[Wait|wait]] support of the parallel function library. The [[Notify|notify]] function will send a notification to a specific process and the [[Wait|wait]] function will block and wait for a notification from a specific process.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notify(1);
};
proc 1 {
wait(0);
var i;
for i from 0 to 9 {
print(itostring(j[i])+"\n");
};
};
};
In the example here process zero will issue a remote write to variable ''j'' (held on process one), then synchronise (complete) this RMA before sending a notification to process one. Process one will block waiting for a notification from process zero and, once it has received the notification, will display its local values of ''j''. Due to the notification and waiting these values will be those written by process zero; if you comment out the [[Wait|wait]] call then process one will just display zeros.
There are some variations of these calls: [[Notifyall|notifyall]] to notify all processes, [[Waitany|waitany]] to wait for a notification from any process and [[Test_notification|test_notification]] to test whether there is a notification from a specific process or not.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[2]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notifyall();
};
proc 1 {
var m:array[Int,10];
var p:=waitany();
m:=j;
sync m;
var i;
for i from 0 to 9 {
print(itostring(m[i])+" written by process "+itostring(p)+"\n");
};
};
proc 2 {
while (!test_notification(0)) { };
var i;
for i from 0 to 9 {
print("Local value is "+itostring(j[i])+"\n");
};
};
};
This example extends the previous one: here ''j'' is held on process two, and process zero remotely writes to it and then issues [[Notifyall|notifyall]] to send a notification to every other process. These other two processes could have used the [[Wait|wait]] call as per the previous example, but instead process one waits on a notification from any process (which returns the ID of the process that issued the notification, which is then displayed) and process two tests for a notification, looping whilst this returns false.
[[Category: Tutorials]]
b589ba16c3993fd33e94273941ef252c394651a5
1365
1364
2016-10-19T11:42:34Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing RMA of data in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
The default behaviour in Mesham is for communication involving variables to be performed via Remote Memory Access (RMA). This is one sided: data is remotely retrieved from, or written to, a target process by the source. We briefly looked at this in the [[Tutorial_-_Shared_Memory|shared memory tutorial]] and here we build on that to consider the concepts in more depth.
== Data visibility ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[commgroup[0,1]]];
var d:Int::allocated[single[on[0]]];
b:=a;
proc 1 {
c:=a;
};
d:=a;
proc 1 {
d:=a;
};
};
In the code snippet above, exactly what communications are occurring (i.e. are processes reading remote data or writing to remote data)? The best way to think about this is via a simple visibility rule: all variables marked multiple (including those with the extra commgroup type) are private to the processes that contain them, and all variables marked single are publicly visible to all processes. Therefore in the assignment at line 6 each process will remotely read from ''a'' held on process one and write this into its own local (private) copy of ''b''. At line 8 only process one will write the value of ''a'' (a local copy, as ''a'' is held on the same process) into its own local (private) version of ''c''; the value of ''c'' on process zero will remain unchanged. For variables marked single, assignment favours reading the value remotely rather than writing remotely where possible: at line 10 the assignment ''d:=a'' will result in process zero reading the value of ''a'' from process one, but at line 12 the only process that can execute the assignment is process one, so this results in a remote write of ''a'' to variable ''d'' held on process zero.
== Synchronisation ==
By default RMA is non-blocking, so remote reads or writes might complete at any point and must be synchronised before values are available. This approach is adopted for performance and scalability, such that many reads and/or writes can occur between synchronisation points. The [[Sync|sync]] keyword provides synchronisation in Mesham and there are two ways to use it. Firstly, ''sync'' on its own will result in a barrier synchronisation, where each process will complete all of its outstanding RMA and then wait (barrier) for all other processes to reach that same point. The other use is with a variable, for instance ''sync v'' (assuming variable ''v'' already exists), which ensures that all outstanding RMA involving only variable ''v'' completes - this second form does not involve any form of barrier so is far more efficient. It is fine to synchronise on a variable which has no outstanding RMA communications, in which case the processes will continue immediately.
Completion of outstanding RMA means that all communications have fully completed, i.e. remote writes have completed and the data is visible on the target process.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync b;
};
The code snippet above illustrates a potential question: based on the assignment ''b:=a'' (which involves RMA), if the programmer wishes to synchronise the RMA for this assignment, should they issue ''sync b'' or ''sync a''? The simple answer is that it doesn't matter, as for synchronisation purposes an assignment ties the variables together; for instance ''sync b'' will synchronise RMA for variable ''b'', RMA for variable ''a'' and any other tied RMA for both these variables and their own tied variables.
== Eager RMA ==
var a:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 9 {
a[i]:=i;
};
sync a;
};
We saw this example previously, where process zero will most likely write out the value of 10 (the value of variable ''i'' after the loop) to all elements of the array; this is because the remote write is issued based on the variable rather than the variable's value at that point. You could instead place the ''sync a'' call directly after the assignment, or alternatively remove this call altogether and append the [[Eageronesided|eageronesided]] type to the type chain of variable ''a'', which will ensure the RMA communication and completion is atomic.
== Bulk Synchronous RMA ==
Many of the RMA examples we have seen in these tutorials follow a bulk synchronous approach (similar to fences), where all processes will synchronise, then communicate and then synchronise again before continuing with computation.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
proc 1 {
a:=55;
};
sync;
b:=a;
sync;
proc 1 {
a:=15;
};
};
Because RMA communication is non-blocking and may complete at any point from issuing the communication up until the synchronisation, in the example here we need two [[Sync|sync]] calls. The first ensures that process zero doesn't race ahead and issue the remote read before process one has written the value of ''55'' into variable ''a''. The second ensures that process one doesn't then rush ahead and overwrite the value of ''a'' with ''15'' before process zero has finished remotely reading it. If this last assignment (''a:=15'') did not exist then the final synchronisation could be weakened to ''sync b'' (or ''sync a''), which would complete the RMA on process zero at that point and leave process one free to rush ahead.
== Notify and wait ==
The bulk synchronous approach is simple but not very scalable; certainly it is possible to play with different synchronisation options (for instance putting them inside the [[Proc|process selection]] blocks) but care must be taken to maintain data consistency. Another approach is to use the [[Notify|notify]] and [[Wait|wait]] support of the parallel function library. The [[Notify|notify]] function will send a notification to a specific process and the [[Wait|wait]] function will block and wait for a notification from a specific process.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notify(1);
};
proc 1 {
wait(0);
var i;
for i from 0 to 9 {
print(itostring(j[i])+"\n");
};
};
};
In the example here process zero will issue a remote write to variable ''j'' (held on process one), then synchronise (complete) this RMA before sending a notification to process one. Process one will block waiting for a notification from process zero and, once it has received the notification, will display its local values of ''j''. Due to the notification and waiting these values will be those written by process zero; if you comment out the [[Wait|wait]] call then process one will just display zeros.
There are some variations of these calls: [[Notifyall|notifyall]] to notify all processes, [[Waitany|waitany]] to wait for a notification from any process and [[Test_notification|test_notification]] to test whether there is a notification from a specific process or not.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[2]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notifyall();
};
proc 1 {
var m:array[Int,10];
var p:=waitany();
m:=j;
sync m;
var i;
for i from 0 to 9 {
print(itostring(m[i])+" written by process "+itostring(p)+"\n");
};
};
proc 2 {
while (!test_notification(0)) { };
var i;
for i from 0 to 9 {
print("Local value is "+itostring(j[i])+"\n");
};
};
};
This example extends the previous one: here ''j'' is held on process two, and process zero remotely writes to it and then issues [[Notifyall|notifyall]] to send a notification to every other process. These other two processes could have used the [[Wait|wait]] call as per the previous example, but instead process one waits on a notification from any process (which returns the ID of the process that issued the notification, which is then displayed) and process two tests for a notification, looping whilst this returns false.
[[Category: Tutorials]]
709af751fe9d562668e7e649dccc755cc86da7d9
1366
1365
2016-10-19T11:43:39Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing RMA of data in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
The default behaviour in Mesham is for communication involving variables to be performed via Remote Memory Access (RMA). This is one sided: data is remotely retrieved from, or written to, a target process by the source. We briefly looked at this in the [[Tutorial_-_Shared_Memory|shared memory tutorial]] and here we build on that to consider the concepts in more depth.
== Data visibility ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[commgroup[0,1]]];
var d:Int::allocated[single[on[0]]];
b:=a;
proc 1 {
c:=a;
};
d:=a;
proc 1 {
d:=a;
};
};
In the code snippet above, exactly what communications are occurring (i.e. are processes reading remote data or writing to remote data)? The best way to think about this is via a simple visibility rule: all variables marked multiple (including those with the extra commgroup type) are private to the processes that contain them, and all variables marked single are publicly visible to all processes. Therefore in the assignment at line 6 each process will remotely read from ''a'' held on process one and write this into its own local (private) copy of ''b''. At line 8 only process one will write the value of ''a'' (a local copy, as ''a'' is held on the same process) into its own local (private) version of ''c''; the value of ''c'' on process zero will remain unchanged. For variables marked single, assignment favours reading the value remotely rather than writing remotely where possible: at line 10 the assignment ''d:=a'' will result in process zero reading the value of ''a'' from process one, but at line 12 the only process that can execute the assignment is process one, so this results in a remote write of ''a'' to variable ''d'' held on process zero.
== Synchronisation ==
By default RMA is non-blocking, so remote reads or writes might complete at any point and must be synchronised before values are available. This approach is adopted for performance and scalability, such that many reads and/or writes can occur between synchronisation points. The [[Sync|sync]] keyword provides synchronisation in Mesham and there are two ways to use it. Firstly, ''sync'' on its own will result in a barrier synchronisation, where each process will complete all of its outstanding RMA and then wait (barrier) for all other processes to reach that same point. The other use is with a variable, for instance ''sync v'' (assuming variable ''v'' already exists), which ensures that all outstanding RMA involving only variable ''v'' completes - this second form does not involve any form of barrier so is far more efficient. It is fine to synchronise on a variable which has no outstanding RMA communications, in which case the processes will continue immediately.
Completion of outstanding RMA means that all communications have fully completed, i.e. remote writes have completed and the data is visible on the target process.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync b;
};
The code snippet above illustrates a potential question: based on the assignment ''b:=a'' (which involves RMA), if the programmer wishes to synchronise the RMA for this assignment, should they issue ''sync b'' or ''sync a''? The simple answer is that it doesn't matter, as for synchronisation purposes an assignment ties the variables together; for instance ''sync b'' will synchronise RMA for variable ''b'', RMA for variable ''a'' and any other tied RMA for both these variables and their own tied variables.
== Eager RMA ==
var a:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 9 {
a[i]:=i;
};
sync a;
};
We saw this example previously, where process zero will most likely write out the value of 10 (the value of variable ''i'' after the loop) to all elements of the array; this is because the remote write is issued based on the variable rather than the variable's value at that point. You could instead place the ''sync a'' call directly after the assignment, or alternatively remove this call altogether and append the [[Eageronesided|eageronesided]] type to the type chain of variable ''a'', which will ensure the RMA communication and completion is atomic.
== Bulk Synchronous RMA ==
Many of the RMA examples we have seen in these tutorials follow a bulk synchronous approach (similar to fences), where all processes will synchronise, then communicate and then synchronise again before continuing with computation.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
proc 1 {
a:=55;
};
sync;
b:=a;
sync;
proc 1 {
a:=15;
};
};
Because RMA communication is non-blocking and may complete at any point from issuing the communication up until the synchronisation, in the example here we need two [[Sync|sync]] calls. The first ensures that process zero doesn't race ahead and issue the remote read before process one has written the value of ''55'' into variable ''a''. The second ensures that process one doesn't then rush ahead and overwrite the value of ''a'' with ''15'' before process zero has finished remotely reading it. If this last assignment (''a:=15'') did not exist then the final synchronisation could be weakened to ''sync b'' (or ''sync a''), which would complete the RMA on process zero at that point and leave process one free to rush ahead.
== Notify and wait ==
The bulk synchronous approach is simple but not very scalable; certainly it is possible to play with different synchronisation options (for instance putting them inside the [[Proc|process selection]] blocks) but care must be taken to maintain data consistency. Another approach is to use the [[Notify|notify]] and [[Wait|wait]] support of the parallel function library. The [[Notify|notify]] function will send a notification to a specific process and the [[Wait|wait]] function will block and wait for a notification from a specific process.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notify(1);
};
proc 1 {
wait(0);
var i;
for i from 0 to 9 {
print(itostring(j[i])+"\n");
};
};
};
In the example here process zero will issue a remote write to variable ''j'' (held on process one), then synchronise (complete) this RMA before sending a notification to process one. Process one will block waiting for a notification from process zero and, once it has received the notification, will display its local values of ''j''. Due to the notification and waiting these values will be those written by process zero; if you comment out the [[Wait|wait]] call then process one will just display zeros.
There are some variations of these calls: [[Notifyall|notifyall]] to notify all processes, [[Waitany|waitany]] to wait for a notification from any process and [[Test_notification|test_notification]] to test whether there is a notification from a specific process or not.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[2]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notifyall();
};
proc 1 {
var m:array[Int,10];
var p:=waitany();
m:=j;
sync m;
var i;
for i from 0 to 9 {
print(itostring(m[i])+" written by process "+itostring(p)+"\n");
};
};
proc 2 {
while (!test_notification(0)) { };
var i;
for i from 0 to 9 {
print("Local value is "+itostring(j[i])+"\n");
};
};
};
This example extends the previous one: here ''j'' is held on process two, and process zero remotely writes to it and then issues [[Notifyall|notifyall]] to send a notification to every other process. These other two processes could have used the [[Wait|wait]] call as per the previous example, but instead process one waits on a notification from any process (which returns the ID of the process that issued the notification, which is then displayed) and process two tests for a notification, looping whilst this returns false.
[[Category: Tutorials]]
11e4a054a1ca2d982d403dbd8ef8dacf8f14a0bb
1367
1366
2016-10-19T11:52:14Z
Polas
1
/* Notify and wait */
wikitext
text/x-wiki
<metadesc>Tutorial describing RMA of data in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
The default behaviour in Mesham is for communication involving variables to be performed via Remote Memory Access (RMA). This is one sided: data is remotely retrieved from, or written to, a target process by the source. We briefly looked at this in the [[Tutorial_-_Shared_Memory|shared memory tutorial]] and here we build on that to consider the concepts in more depth.
== Data visibility ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[commgroup[0,1]]];
var d:Int::allocated[single[on[0]]];
b:=a;
proc 1 {
c:=a;
};
d:=a;
proc 1 {
d:=a;
};
};
In the code snippet above, exactly what communications are occurring (i.e. are processes reading remote data or writing to remote data)? The best way to think about this is via a simple visibility rule: all variables marked multiple (including those with the extra commgroup type) are private to the processes that contain them, and all variables marked single are publicly visible to all processes. Therefore in the assignment at line 6 each process will remotely read from ''a'' held on process one and write this into its own local (private) copy of ''b''. At line 8 only process one will write the value of ''a'' (a local copy, as ''a'' is held on the same process) into its own local (private) version of ''c''; the value of ''c'' on process zero will remain unchanged. For variables marked single, assignment favours reading the value remotely rather than writing remotely where possible: at line 10 the assignment ''d:=a'' will result in process zero reading the value of ''a'' from process one, but at line 12 the only process that can execute the assignment is process one, so this results in a remote write of ''a'' to variable ''d'' held on process zero.
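The privacy of multiple-allocated variables can be observed directly by printing. The following is a minimal sketch (not from the original tutorial) assuming two processes and the ''print''/''itostring'' functions used in the notify examples of this tutorial, with barrier synchronisations added so the writes complete before printing:

```
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[1]]];
var c:Int::allocated[multiple[commgroup[0,1]]];
c:=0;
proc 1 {
a:=23;
};
sync;
proc 1 {
c:=a;
};
sync;
print("Value of c is "+itostring(c)+"\n");
};
```

Following the visibility rule, process one should print 23 whilst process zero still prints 0, since each process holds its own private copy of ''c''.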
== Synchronisation ==
By default RMA is non-blocking, so remote reads or writes might complete at any point and must be synchronised before values are available. This approach is adopted for performance and scalability, such that many reads and/or writes can occur between synchronisation points. The [[Sync|sync]] keyword provides synchronisation in Mesham and there are two ways to use it. Firstly, ''sync'' on its own will result in a barrier synchronisation, where each process will complete all of its outstanding RMA and then wait (barrier) for all other processes to reach that same point. The other use is with a variable, for instance ''sync v'' (assuming variable ''v'' already exists), which ensures that all outstanding RMA involving only variable ''v'' completes - this second form does not involve any form of barrier so is far more efficient. It is fine to synchronise on a variable which has no outstanding RMA communications, in which case the processes will continue immediately.
Completion of outstanding RMA means that all communications have fully completed, i.e. remote writes have completed and the data is visible on the target process.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync b;
};
The code snippet above illustrates a potential question: based on the assignment ''b:=a'' (which involves RMA), if the programmer wishes to synchronise the RMA for this assignment, should they issue ''sync b'' or ''sync a''? The simple answer is that it doesn't matter, as for synchronisation purposes an assignment ties the variables together; for instance ''sync b'' will synchronise RMA for variable ''b'', RMA for variable ''a'' and any other tied RMA for both these variables and their own tied variables.
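To sketch how these ties chain (assuming the behaviour described above), consider a second variable also reading from ''a''; a single ''sync b'' should then complete the RMA for all three variables, because ''c'' is tied to ''a'' and ''a'' is tied to ''b'':

```
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[]];
b:=a;
c:=a;
sync b;
};
```

This is only an illustrative sketch; the point is that a single variable synchronisation suffices for a whole group of tied assignments without any barrier.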
== Eager RMA ==
var a:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 9 {
a[i]:=i;
};
sync a;
};
We saw this example previously, where process zero will most likely write out the value of 10 (the value of variable ''i'' after the loop) to all elements of the array; this is because the remote write is issued based on the variable rather than the variable's value at that point. You could instead place the ''sync a'' call directly after the assignment, or alternatively remove this call altogether and append the [[Eageronesided|eageronesided]] type to the type chain of variable ''a'', which will ensure the RMA communication and completion is atomic.
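The first alternative, synchronising directly after each assignment, can be sketched as below; the [[Eageronesided|eageronesided]] option would instead be appended to the declaration's type chain (see its documentation for the exact syntax):

```
var a:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 9 {
a[i]:=i;
sync a;
};
};
```

Here each element's remote write completes before ''i'' advances, so the intended values are written, at the cost of one synchronisation per iteration.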
== Bulk Synchronous RMA ==
Many of the RMA examples we have seen in these tutorials follow a bulk synchronous approach (similar to fences), where all processes will synchronise, then communicate and then synchronise again before continuing with computation.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
proc 1 {
a:=55;
};
sync;
b:=a;
sync;
proc 1 {
a:=15;
};
};
Because RMA communication is non-blocking and may complete at any point from issuing the communication up until the synchronisation, in the example here we need two [[Sync|sync]] calls. The first ensures that process zero doesn't race ahead and issue the remote read before process one has written the value of ''55'' into variable ''a''. The second ensures that process one doesn't then rush ahead and overwrite the value of ''a'' with ''15'' before process zero has finished remotely reading it. If this last assignment (''a:=15'') did not exist then the final synchronisation could be weakened to ''sync b'' (or ''sync a''), which would complete the RMA on process zero at that point and leave process one free to rush ahead.
== Notify and wait ==
The bulk synchronous approach is simple but not very scalable; certainly it is possible to play with different synchronisation options (for instance putting them inside the [[Proc|process selection]] blocks) but care must be taken to maintain data consistency. Another approach is to use the [[Notify|notify]] and [[Wait|wait]] support of the parallel function library. The [[Notify|notify]] function will send a notification to a specific process and the [[Wait|wait]] function will block and wait for a notification from a specific process.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notify(1);
};
proc 1 {
wait(0);
var i;
for i from 0 to 9 {
print(itostring(j[i])+"\n");
};
};
};
In the example here process zero will issue a remote write to variable ''j'' (held on process one), then synchronise (complete) this RMA before sending a notification to process one. Process one will block waiting for a notification from process zero and, once it has received the notification, will display its local values of ''j''. Due to the notification and waiting these values will be those written by process zero; if you comment out the [[Wait|wait]] call then process one will just display zeros.
There are some variations of these calls: [[Notifyall|notifyall]] to notify all processes, [[Waitany|waitany]] to wait for a notification from any process and [[Test_notification|test_notification]] to test whether there is a notification from a specific process or not.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[2]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notifyall();
};
proc 1 {
var m:array[Int,10];
var p:=waitany();
m:=j;
sync m;
var i;
for i from 0 to 9 {
print(itostring(m[i])+" written by process "+itostring(p)+"\n");
};
};
proc 2 {
while (!test_notification(0)) { };
var i;
for i from 0 to 9 {
print("Local value is "+itostring(j[i])+"\n");
};
};
};
This example extends the previous one: here ''j'' is held on process two, and process zero remotely writes to it and then issues [[Notifyall|notifyall]] to send a notification to every other process. These other two processes could have used the [[Wait|wait]] call as in the previous example, but instead process one waits on a notification from any process (which returns the ID of the notifying process, which is displayed) and process two tests for a notification, looping whilst this returns false.
[[Category: Tutorials]]
4cbdc5b518f8f6d4dae32c294f9edc8b78a1d3df
Notify
0
253
1369
2016-10-19T11:12:35Z
Polas
1
Created page with '== Overview == This notify(n) function will notify process ''n'', this target process can wait on or test for a notification * '''Pass:''' an [[Int]] representing the process I…'
wikitext
text/x-wiki
== Overview ==
The notify(n) function notifies process ''n''; the target process can then wait on or test for this notification.
* '''Pass:''' an [[Int]] representing the process ID to notify
* '''Returns:''' nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(1);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
3fb38973ecaed9ef88267a46bd1ac63d7f294b24
1370
1369
2016-10-19T11:12:48Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The notify(n) function notifies process ''n''; the target process can then wait on or test for this notification.
* '''Pass:''' an [[Int]] representing the process ID to notify
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(1);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
5181c774acf05f3db8ab990a0fb10954d6c6c205
1371
1370
2016-10-19T11:14:22Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The notify(n) function notifies process ''n''; the target process can then wait on or test for this notification. This call is non-blocking and the caller continues as soon as it is issued.
* '''Pass:''' an [[Int]] representing the process ID to notify
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(1);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
a9d7f6dd99ba0bfdb4e4c6544176156427de554b
1372
1371
2016-10-19T11:18:59Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Overview ==
The notify(n) function notifies process ''n''; the target process can then wait on or test for this notification. This call is non-blocking and the caller continues as soon as it is issued.
* '''Pass:''' an [[Int]] representing the process ID to notify
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
c3159b767a08110c9bce6359ec209bf4298a4f1d
Wait
0
254
1374
2016-10-19T11:13:44Z
Polas
1
Created page with '== Overview == This wait(n) function will wait for a notification from process ''n'' * '''Pass:''' an [[Int]] representing the process ID to wait for a notification from * '''R…'
wikitext
text/x-wiki
== Overview ==
The wait(n) function waits for a notification from process ''n''.
* '''Pass:''' an [[Int]] representing the process ID to wait for a notification from
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(1);
};
proc 0 {
wait(0);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
fbb774e4d259fc56dc6f3596c09cfbc63b5b6c2e
1375
1374
2016-10-19T11:13:49Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The wait(n) function waits for a notification from process ''n''.
* '''Pass:''' an [[Int]] representing the process ID to wait for a notification from
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(1);
};
proc 0 {
wait(0);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
97172899a64fab7bad0cd790cc4f9e7eb63eba66
1376
1375
2016-10-19T11:13:58Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The wait(n) function blocks until a notification arrives from process ''n''.
* '''Pass:''' an [[Int]] representing the process ID to wait for a notification from
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(1);
};
proc 0 {
wait(0);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
5ae1719459035a6338f3a43fb7898ee7d7de1f7f
1377
1376
2016-10-19T11:18:44Z
Polas
1
wikitext
text/x-wiki
== Overview ==
The wait(n) function blocks until a notification arrives from process ''n''.
* '''Pass:''' an [[Int]] representing the process ID to wait for a notification from
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
wait(1);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
f793f1450c486b62a1eaf766d1642443ad5d6719
Notifyall
0
255
1379
2016-10-19T11:15:26Z
Polas
1
Created page with '== Overview == This notifyall() function will notify all other process, all these target process can wait on or test for a notification. This is non-blocking and will continue a…'
wikitext
text/x-wiki
== Overview ==
The notifyall() function notifies all other processes; each of these target processes can wait on or test for the notification. This call is non-blocking and the caller continues as soon as it is issued.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notifyall();
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
77e3323c114c9cf41cd69acab40366d45a1667d9
Waitany
0
256
1381
2016-10-19T11:16:37Z
Polas
1
Created page with '== Overview == This waitany() function will block and wait for a notification from any process. The id of that process is returned. * '''Pass:''' Nothing * '''Returns:''' The i…'
wikitext
text/x-wiki
== Overview ==
The waitany() function blocks and waits for a notification from any process. The ID of that process is returned.
* '''Pass:''' Nothing
* '''Returns:''' The ID of the process that notified this process.
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(1);
};
proc 0 {
var p:=waitany();
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
32e04e4020ad8005335c59b12af3feb8eff2c9d0
1382
1381
2016-10-19T11:19:18Z
Polas
1
/* Example */
wikitext
text/x-wiki
== Overview ==
The waitany() function blocks and waits for a notification from any process. The ID of that process is returned.
* '''Pass:''' Nothing
* '''Returns:''' The ID of the process that notified this process.
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
var p:=waitany();
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
31a56ec116dd43d387be75bc1bdc4e16e98d5c12
Test notification
0
257
1384
2016-10-19T11:18:35Z
Polas
1
Created page with '== Overview == This test_notification(n) function will test for a notification from process ''n'', if such a notification is available then this is received (i.e. one need not t…'
wikitext
text/x-wiki
== Overview ==
The test_notification(n) function tests for a notification from process ''n''; if such a notification is available then it is received (i.e. one need not then call [[Wait|wait]] or [[Waitall|waitall]]).
* '''Pass:''' an [[Int]] representing the process ID to test for a notification from
* '''Returns:''' a [[Bool]] representing whether a notification was received or not
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
while (!test_notification(1)) { };
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
f9a6ffc9d6c42b608861c2c48b844a5a13bffab7
1385
1384
2016-10-19T11:20:12Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
The test_notification(n) function tests for a notification from process ''n''; if such a notification is available then it is received (i.e. one need not then call [[Wait|wait]] or [[Waitany|waitany]]).
* '''Pass:''' an [[Int]] representing the process ID to test for a notification from
* '''Returns:''' a [[Bool]] representing whether a notification was received or not
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
while (!test_notification(1)) { };
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
72f1a22f7b9a4ba7b737df35707719bba46201a7
1386
1385
2016-10-19T11:20:21Z
Polas
1
/* Overview */
wikitext
text/x-wiki
== Overview ==
The test_notification(n) function tests for a notification from process ''n''; if such a notification is available then it is received (i.e. one need not then call [[Wait|wait]] or [[Waitany|waitany]]).
* '''Pass:''' an [[Int]] representing the process ID to test for a notification from
* '''Returns:''' a [[Bool]] representing whether a notification was received or not
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
while (!test_notification(1)) { };
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
aba0d271e2af22cbc2b194aa3ca7c02505263bde
Tutorial - Parallel Types
0
224
1243
1242
2016-10-19T11:43:00Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of types for more advanced parallelism in Mesham</metadesc>
'''Tutorial number six''' - [[Tutorial_-_Shared Memory|prev]] :: [[Tutorial_-_RMA|next]]
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model it can have a performance penalty associated with it. In this tutorial we shall look at overriding the default communication, via types, to a more message passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]] between processes ''1'' and ''2''. At line 8, process 1 writes the value ''23'' into this channel and at line 11, process 2 reads that value out of the channel. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example).
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int:: pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=i%2==0?1:2;
var slave:=i%2==0?2:1;
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point-to-point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that only process 1 may send and only process 2 may receive.
== Extra parallel control ==
By default the channel type is blocking; there are a number of fine-grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that it will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we are declaring ''a'' to be a normal [[Int]] variable, then on line 6 we are coercing the [[Broadcast|broadcast]] type with the existing type chain of ''a'' just for that assignment and telling the type that process ''2'' is the root process. The root process is the one that drives the broadcast itself, i.e. here process 2 is sending the value ''23'' to all other processes. Then on line 7 we are just using ''a'' as a normal program variable to display its value. This use of types is actually quite a powerful one; we can append extra types for a specific expression and then after that expression has completed the behaviour is back to what it was before.
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, applying some operation, [[Reduce|reduce]] this to a resulting value.
#include <io>
#include <string>
function void main() {
var p;
par p from 0 to 19 {
var a:Int;
a::reduce[0,"sum"]:=p;
if (p==0) print(itostring(a)+"\n");
};
};
This code will combine the values of each process's ''p'' onto process 0 and sum them up. Multiple operations are supported and are listed in the [[Reduce|reduce type documentation]].
== Eager one sided communication ==
Whilst normal one sided communications follow the Logic Of Global Synchrony (LOGS) model of shared memory communication and complete only when a synchronisation is issued, it is possible to override this default behaviour to complete communications at the point of issuing the assignment or access instead.
#include <io>
#include <string>
function void main() {
var i:Int::eageronesided::allocated[single[on[1]]];
proc 0 { i:=23; };
sync;
proc 1 { print(itostring(i)+"\n"); };
};
Compile and run this fragment and see that the value ''23'' has been set without any explicit synchronisation on variable ''i''. Now remove the eager part of the [[Eageronesided|eager one sided type]] (or remove it altogether; remember [[onesided]] is the default communication) and see that, without a synchronisation, the value is 0. You can add the [[Sync|sync]] keyword after line 6 to complete the normal one sided call. We require a synchronisation between the proc calls here to ensure that process 1 does not read the value before process 0 has set it.
[[Category:Tutorials|Parallel Types]]
e0b1d8dc1513ef34df1dee7f85779b8bcb656f04
1244
1243
2016-10-19T11:43:22Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of types for more advanced parallelism in Mesham</metadesc>
'''Tutorial number six''' - [[Tutorial_-_Shared Memory|prev]] :: [[Tutorial_-_Arrays|next]]
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model it can have a performance penalty associated with it. In this tutorial we shall look at overriding the default communication, via types, to a more message passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]] between processes ''1'' and ''2''. At line 8, process 1 writes the value ''23'' into this channel and at line 11, process 2 reads that value out of the channel. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example).
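A channel is, in essence, a typed point-to-point FIFO: one fixed sender, one fixed receiver, with the read blocking until a value arrives. A Python sketch of that behaviour using a queue per direction (names are illustrative, not part of Mesham):

```python
import threading
import queue

# A unidirectional channel from "process" 1 to "process" 2,
# modelled as a FIFO queue: writes by 1 become reads by 2.
channel_1_to_2 = queue.Queue()

def proc1():
    channel_1_to_2.put(23)    # emulates a := 23 on process 1

received = []
def proc2():
    b = channel_1_to_2.get()  # emulates b := a on process 2 (blocks)
    received.append(b)

t2 = threading.Thread(target=proc2); t2.start()
t1 = threading.Thread(target=proc1); t1.start()
t1.join(); t2.join()
print(received[0])
```

Unidirectionality is captured by the fact that only proc1 ever calls `put` and only proc2 ever calls `get` on this queue.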
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int:: pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=i%2==0?1:2;
var slave:=i%2==0?2:1;
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point-to-point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that only process 1 may send and only process 2 may receive.
== Extra parallel control ==
By default the channel type is blocking; there are a number of fine-grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that it will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
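The shape of a nonblocking send is: issue the transfer, get back a handle immediately, and only wait at the explicit completion point. A Python sketch of that split between issue and completion, using a future as the stand-in for the outstanding communication (the names are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor
import queue

# The channel itself, plus a worker that performs transfers "behind"
# the issuing code, as a nonblocking channel would.
chan = queue.Queue()
pool = ThreadPoolExecutor(max_workers=1)

def nonblocking_send(value):
    # returns a handle at once rather than waiting for delivery
    return pool.submit(chan.put, value)

handle = nonblocking_send(23)  # emulates a := 23 under nonblocking[]
handle.result()                # emulates sync a: the completion point
b = chan.get()                 # emulates b := a on the receiving side
pool.shutdown()
print(b)
```

Removing the `handle.result()` line is the analogue of removing ''sync a'': the program may still work by luck, but nothing guarantees the transfer has completed before it is relied upon.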
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we are declaring ''a'' to be a normal [[Int]] variable, then on line 6 we are coercing the [[Broadcast|broadcast]] type with the existing type chain of ''a'' just for that assignment and telling the type that process ''2'' is the root process. The root process is the one that drives the broadcast itself, i.e. here process 2 is sending the value ''23'' to all other processes. Then on line 7 we are just using ''a'' as a normal program variable to display its value. This use of types is actually quite a powerful one; we can append extra types for a specific expression and then after that expression has completed the behaviour is back to what it was before.
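The effect of the broadcast itself is simple to state: after the collective, every process holds the root's value. A tiny sequential sketch of that semantics (the list-of-processes model is an illustration, not how Mesham implements it):

```python
# Each slot stands for one process's local copy of variable a.
def broadcast(values, root):
    # After a broadcast, every process holds the root's value.
    return [values[root]] * len(values)

values = [0, 0, 23, 0]            # process 2 holds 23 before the call
values = broadcast(values, root=2)  # emulates a::broadcast[2] := 23
print(values)
```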
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, applying some operation, [[Reduce|reduce]] this to a resulting value.
#include <io>
#include <string>
function void main() {
var p;
par p from 0 to 19 {
var a:Int;
a::reduce[0,"sum"]:=p;
if (p==0) print(itostring(a)+"\n");
};
};
This code will combine the values of each process's ''p'' onto process 0 and sum them up. Multiple operations are supported and are listed in the [[Reduce|reduce type documentation]].
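The "sum" reduction above combines one contribution per process into a single value on the root; with twenty processes contributing their own ids, the root ends up with 0+1+...+19. A sequential sketch of the same combine:

```python
from functools import reduce

# par p from 0 to 19: each "process" contributes its own id p.
contributions = list(range(20))

# The "sum" operation folded over all contributions, landing on the
# root, as reduce[0,"sum"] does above.
total = reduce(lambda x, y: x + y, contributions, 0)
print(total)  # 190
```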
== Eager one sided communication ==
Whilst normal one sided communications follow the Logic Of Global Synchrony (LOGS) model of shared memory communication and complete only when a synchronisation is issued, it is possible to override this default behaviour to complete communications at the point of issuing the assignment or access instead.
#include <io>
#include <string>
function void main() {
var i:Int::eageronesided::allocated[single[on[1]]];
proc 0 { i:=23; };
sync;
proc 1 { print(itostring(i)+"\n"); };
};
Compile and run this fragment and see that the value ''23'' has been set without any explicit synchronisation on variable ''i''. Now remove the eager part of the [[Eageronesided|eager one sided type]] (or remove it altogether; remember [[onesided]] is the default communication) and see that, without a synchronisation, the value is 0. You can add the [[Sync|sync]] keyword after line 6 to complete the normal one sided call. We require a synchronisation between the proc calls here to ensure that process 1 does not read the value before process 0 has set it.
[[Category:Tutorials|Parallel Types]]
d77cb9304855c7a7af40589a701d4ffc96f995ec
Tutorial - Arrays
0
223
1235
1234
2016-10-19T11:43:31Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing collecting data together via arrays in Mesham</metadesc>
'''Tutorial number seven''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_RMA|next]]
== Introduction ==
An [[Array|array]] is a collection of element data in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall have a look at how to create, use and communicate arrays.
== Simple arrays ==
function void main() {
var a:array[Int,10];
};
The above code will declare variable ''a'' to be an [[Array|array]] of ten [[Int|Ints]] which are indexed 0 to 9 inclusively. In the absence of further information a set of default types will be applied which are; [[Heap|heap]], [[Onesided|onesided]], [[Row|row]], [[Allocated|allocated]], [[Multiple|multiple]]. Arrays, when allocated to the heap, are subject to garbage collection which will remove them when no longer used.
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
var i;
for i from 0 to 9 {
a[i]:=i;
};
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code snippet demonstrates writing to and reading from elements of an array; if you compile and run it you will see it displays the values ''0'' to ''9'' on standard output. We can access an element of an array (for reading or writing) via the ''[x]'' syntax, where ''x'' is either an [[Int]] constant or variable.
=== Arrays and functions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
fill(a);
display(a);
};
function void fill(var a:array[Int,10]) {
var i;
for i from 0 to 9 {
a[i]:=i;
};
};
function void display(var a:array[Int]) {
var i;
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code demonstrates passing arrays into functions, and there are a couple of noteworthy points to make here. First, because an [[Array|array]] is, by default, allocated to the [[Heap|heap]], as discussed in the [[Tutorial - Functions|functions tutorial]], this is pass by reference. Hence modifications made in the ''fill'' function do affect the original data allocated in the ''main'' function, which is what we want here. Secondly, see that the type we provide to the ''display'' function does not have any explicit size associated with the array? It is not always possible to know the size of an array being passed into a function, so Mesham allows the type of a function argument to be specified without a size, but with two restrictions: first, it must be a one dimensional array and secondly, no compile time bounds checking can take place.
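Pass by reference here means the callee fills the caller's buffer in place rather than a copy. Python lists happen to behave the same way, so the fill/display pair above can be mirrored directly (a sketch of the semantics, not of Mesham itself):

```python
# Heap-allocated arrays pass by reference: a callee can fill the
# caller's buffer in place. Python lists share this behaviour.
def fill(a):
    for i in range(len(a)):
        a[i] = i              # mutates the caller's array

def display(a):
    # works for any length, like the unsized array[Int] argument
    return "".join(str(x) for x in a)

a = [0] * 10
fill(a)                       # modifications are visible to the caller
out = display(a)
print(out)
```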
=== Multi dimensional arrays ===
Arrays can be any number of dimensions just by adding extra bounds into the type declaration:
function void main() {
var a:array[Int,16,8];
a[0][1]:=23;
};
This code declares variable ''a'' to be an [[Array|array]] of two dimensions; the first of size 16 and the second of size 8. By default all allocation of arrays is [[Row|row major]], although this can be overridden. Line three illustrates writing into an element of a two dimensional array.
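Row major allocation lays the 16 by 8 array out one full row after another, so element [i][j] sits at flat offset i*8 + j. A quick check of that formula (illustrative, not Mesham's internals):

```python
ROWS, COLS = 16, 8

def row_major_offset(i, j):
    # one full row of COLS elements per step of i
    return i * COLS + j

flat = [0] * (ROWS * COLS)
flat[row_major_offset(0, 1)] = 23   # emulates a[0][1] := 23
print(flat[1])  # 23: [0][1] is the second element of the first row
```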
== Communication of arrays ==
Arrays can be communicated entirely, per dimension or by individual elements.
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
a[0][1]:=28;
};
sync;
proc 1 {
print(itostring(a[0][1])+"\n");
};
};
In this example process 0 writes to the (remote) memory of process 1 which contains the array, synchronisation occurs and then the value is displayed by process 1 to standard output.
=== Communicating multiple dimensions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 7 {
a[2][i]:=i;
};
};
sync;
proc 1 {
var i;
for i from 0 to 7 {
print(itostring(a[2][i])+"\n");
};
};
};
Compile and run this code and look at the output: is it just a list of the value ''8'', not what you expected? In this example the values copied across may be any number between 0 and 8, because at each assignment ''a[2][i]:=i;'' we are setting the remote value of ''a'' at this specific index to the value held in ''i''. However, this communication is not guaranteed to complete until the [[Sync|synchronisation]], at which point the value of ''i'' is ''8'' (the loop iterates up to and including 7, after which ''i'' is incremented, found to be too large, and the loop ceases). It is something to be aware of: the value of a variable being remotely written ''matters'' until after the corresponding synchronisation.
There are a number of ways in which we could change this code to make it do what we want; the easiest is to use a temporary variable allocated on the heap (which will be garbage collected after the synchronisation). To do this, replace the ''proc 0'' block with:
proc 0 {
var i;
for i from 0 to 7 {
var m:Int::heap;
m:=i;
a[2][i]:=m;
};
};
This is an example of writing into the remote memory of a process and modifying multiple indexes of an array (in any dimension).
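The hazard above (a deferred communication reads its source variable only at completion, by which time the loop counter has reached its final value) has a close analogue in Python's late-binding closures, which makes it easy to demonstrate; the fix with a fresh per-iteration binding mirrors the temporary ''m'' used in the corrected block (this is an analogy, not Mesham semantics):

```python
# Deferred "communications" that only read their source when completed:
deferred = []
for i in range(8):
    deferred.append(lambda: i)     # records a reference, not the value

# "Synchronisation": all transfers complete now and all see i == 7.
late = [d() for d in deferred]

# The fix mirrors the temporary variable m: bind a fresh copy per
# iteration so each transfer carries its own value.
deferred_fixed = []
for i in range(8):
    deferred_fixed.append(lambda m=i: m)
early = [d() for d in deferred_fixed]
print(late, early)
```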
=== Communicating entire arrays ===
#include <io>
#include <string>
function void main() {
var a:array[Int,20]::allocated[single[on[1]]];
var b:array[Int,20]::allocated[single[on[2]]];
proc 1 {
var i;
for i from 0 to 19 {
a[i]:=1;
};
};
b:=a;
sync;
proc 2 {
var i;
for i from 0 to 19 {
print(itostring(b[i])+"\n");
};
};
};
This code example demonstrates populating an array held on one process, assigning it in its entirety to an array on another process (line 13), synchronising, and then the other process reading out all elements of the target array which has just been remotely written.
== Row and column major ==
By default arrays are row major allocated using the [[Row|row]] type. This can be overridden to column major via the [[Col|col]] type.
function void main() {
var a:array[Int,16,8]::allocated[col::multiple];
};
This will allocate array ''a'' as an [[Int]] array of 16 by 8, allocated to all processes using column major memory allocation.
For something more interesting let's have a look at the following code:
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8];
var i;
var j;
for i from 0 to 15 {
for j from 0 to 7 {
a[i][j]:=(i*10) + j;
};
};
print(itostring(a::col[][14][7]));
};
By default variable ''a'' is [[Row|row major]] allocated and we fill up the array in this fashion. However, in the [[Print|print]] statement we access the indexes of this array in a [[Col|column major]] fashion. Try changing [[Col|col]] to [[Row|row]], or removing it altogether, to see the difference in value. Behind the scenes the types perform the appropriate memory lookup based upon their meaning and the indexes provided. Mixing memory allocation in this manner can be very useful for array transposition amongst other things. ''Exercise:'' Experiment with the [[Col|col]] and [[Row|row]] types and also see what effect placing them in the type chain of ''a'', as in the previous example, has.
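The row and col types differ only in the offset they compute into the same flat memory: row major keeps rows contiguous (offset i*COLS + j), column major keeps columns contiguous (offset i + j*ROWS). A sketch of the two lookups over memory filled row major, as in the example above (illustrative formulas, not Mesham's implementation):

```python
ROWS, COLS = 16, 8

# Fill flat memory in row-major order with a[i][j] = i*10 + j.
flat = [0] * (ROWS * COLS)
for i in range(ROWS):
    for j in range(COLS):
        flat[i * COLS + j] = i * 10 + j

def read_row_major(i, j):
    return flat[i * COLS + j]   # row type: rows are contiguous

def read_col_major(i, j):
    return flat[i + j * ROWS]   # col type: columns are contiguous

# The same indexes land on different elements under the two layouts.
print(read_row_major(14, 7), read_col_major(14, 7))
```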
[[Category: Tutorials|Arrays]]
71078da30e379159816c2afd63b2f66de4097383
Tutorial - Dynamic Parallelism
0
237
1322
1321
2016-10-19T11:43:59Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc>
'''Tutorial number nine''' - [[Tutorial_-_RMA|prev]] :: [[Tutorial_-_Advanced Types|next]]
== Introduction ==
If you are following these tutorials in order then you could be forgiven for assuming that Mesham requires the programmer to explicitly set the number of processes in their code. This is entirely untrue and, whilst structuring your code around this assumption can lead to cleaner code, Mesham supports a dynamic number of processes which is decided upon at runtime. This tutorial will look at how you can use dynamic parallelism and write your code in this manner.
== In its simplest form ==
#include <parallel>
#include <io>
#include <string>
function void main() {
print(itostring(pid())+"\n");
};
Compile the above code and run it with one process, now run it with ten, now with any number you want. See how, even though the code explicitly requires only one process, running with more simply executes that code on all the other processes too. There are a number of rules associated with writing parallel codes in this fashion: firstly, '''the number of processes can exceed the required number but it cannot be smaller''', so if our code requires ten processes then we can run it with twenty, one hundred or even one thousand, but we cannot run it with nine. Secondly, the code and data applicable to these extra processes is all variables allocated [[Multiple|multiple]] and all code written SPMD style (i.e. outside of [[Par|par]], [[Group|group]], [[Proc|proc]] and parallel composition).
== A more complex example ==
So let's have a look at something a bit more complex that involves the default shared memory communication
#include <parallel>
#include <io>
#include <string>
function void main() {
var numberProc:=processes();
var s:array[Int, numberProc]::allocated[single[on[0]]];
s[pid()]:=pid();
sync;
proc 0 {
var i;
for i from 0 to processes() - 1 {
print(itostring(i)+" = "+itostring(s[i])+"\n");
};
};
};
Compile and run this example with any number of processes and look at how the code handles us changing this number. There are a couple of general points to make about this code: firstly, notice that we are still using the [[Proc|proc]] parallel construct of Mesham for process selection (which is absolutely fine to do). We could instead have written something like ''if (pid()==0)''; the choice is entirely up to the programmer.
Next, modify variable ''s'' to be on process 2 (and change the [[Proc|proc]] statement to run on this process also). If you recompile and run this code then it will work fine as long as the number of processes is at least the required number (which is 3). If you were to try to run the code with 2 processes, for example, then it will give you an error; the only exception is the usual rule that if you run it with one process then Mesham will automatically spawn the required number and run over these. However, this illustration raises an important point: how can we (easily) tell how many processes to use? Happily there are two ways: either compile the code using the ''-summary'' flag or run the resulting Mesham executable with the ''--mesham_p'' flag, which will report how many processes that executable expects to be run over.
== Dynamic type arguments ==
Often, when wanting to write parallel code in this manner, you also want to use flexible message passing constructs. Happily all of the message passing override types such as [[Channel|channel]], [[Reduce|reduce]], [[Broadcast|broadcast]] support the provision of arguments which are only known at runtime. Let's have a look at an example to motivate this.
#include <parallel>
#include <io>
#include <string>
function void main() {
var a:=pid();
var b:=a+1;
var c:=a-1;
var c1:Int::allocated[multiple]::channel[a,b];
var c2:Int::allocated[multiple]::channel[c,a];
var t:=0;
if (pid() > 0) t:=c2;
if (pid() < processes() - 1) c1:=t+a;
t:=t+a;
if (pid() + 1 == processes()) print(itostring(t)+"\n");
};
The above code is a prefix-sums style algorithm, where each process sends to the next one (whose ID is one greater than its own) its current ID plus the IDs of all processes before it. The process with the largest ID then displays the total result, which obviously depends on the number of processes used to run the code. One point to note is that we can (currently) only use variables and values as arguments to types; for example, if you used the function call ''pid()'' directly in the [[Channel|channel]] type then it would give a syntax error. This is a limitation of the Mesham parser and will be addressed in a future release.
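Since each process simply adds its own id to the running total received from its predecessor, the value printed by the last process is the sum of all ids, 0+1+...+(P-1). That chain can be sketched sequentially (names are illustrative):

```python
def chained_total(processes):
    # Each "process" p receives t from p-1 (process 0 receives
    # nothing), adds its own id, and forwards the result along the
    # channel; the last process holds the grand total.
    t = 0
    for p in range(processes):
        t = t + p          # emulates t := t + a on process p
    return t               # the value printed by the last process

print(chained_total(4))   # 0 + 1 + 2 + 3 = 6
```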
[[Category: Tutorials|Dynamic Parallelism]]
87cef3b5a09feb946464b8866af7063b6092ab3d
Tutorial - Advanced Types
0
238
1327
1326
2016-10-19T11:44:18Z
Polas
1
wikitext
text/x-wiki
<metadesc>Tutorial describing advanced type features of Mesham</metadesc>
'''Tutorial number ten''' - [[Tutorial_-_Dynamic Parallelism|prev]]
== Introduction ==
Mesham has a number of advanced typing features over and above type chains and type coercion. In this tutorial we will look at some of these, how they might be used and how they can simplify your program code.
== Type Variables ==
The language has a concept of a type variable: a compile time, programmer defined type representing a more complex type chain. Let's look at this in more detail via an example
function void main() {
typevar typeA::=Int::allocated[multiple];
typevar typeB::=String::allocated[single[on[3]]];
var a:typeA;
var b:typeB;
};
In this example we create two type variables called ''typeA'' and ''typeB'' which represent different type chains. The actual program variables ''a'' and ''b'' are then declared using these type variables. Notice how type assignment uses the ''::='' operator rather than the normal program variable assignment operator '':=''.
function void main() {
typevar typeA::=Int::allocated[multiple];
var a:typeA;
typeA::=String;
var b:typeA;
typeA::=typeA::const;
var c:typeA;
};
This example demonstrates assigning types and chains to existing type variables. At lines two and three we declare the type variable ''typeA'' and use it in the declaration of program variable ''a''. However, then on line five we modify the value of the type variable, ''typeA'' using the ''::='' operator to be a [[String]] instead. Then on line six we declare variable ''b'' using this type variable, which effectively sets the type to be a String. Line eight demonstrates how we can use the type variable in type chain modification and variable ''c'' is a constant [[String]].
'''Note:''' It is important to appreciate that type variables exist only during compilation, they do not exist at runtime and as such can not be used in conditional statements.
== Types of program variables ==
Mesham provides some additional keywords to help manage and reference the type of program variables; however, it is imperative to remember that these are static only, i.e. they exist only during compilation.
=== Currenttype ===
Mesham has an inbuilt [[Currenttype|currenttype]] keyword which resolves to the current type chain of a program variable.
a:currenttype a :: const;
a:a::const
In this code snippet both lines of code are identical, they will set the type of program variable ''a'' to be the current type chain combined with the [[Const|const]] type. Note that using a program variable in a type chain such as in the snippet above is a syntactic short cut for the current type (using the [[Currenttype|currenttype]] keyword) and either can be used.
=== Declaredtype ===
It can sometimes be useful to reference or even revert back to the declared type of a program variable later on in execution. To do this we supply the [[Declaredtype|declaredtype]] keyword.
function void main() {
var a:Int;
a:a::const;
a:declaredtype a;
a:=23;
};
This code will compile and work fine because, although we are coercing the type of ''a'' to include the [[Const|const]] type at line three, on line four we revert the type to the declared type of the program variable. If you are unsure why this is the case, then move the assignment around to see when the code will not compile.
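To make the failure case concrete, the following sketch reorders the statements so that the assignment happens while ''a'' still carries the [[Const|const]] coercion, before the type is reverted - this variant should be rejected at compile time:
function void main() {
var a:Int;
a:a::const;
a:=23;
a:declaredtype a;
};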
== An example ==
Type variables are commonly used with [[Record|records]] and [[Referencerecord|referencerecords]]. Indeed, the [[Complex|complex]] type obtained from the [[:Category:Maths_Functions|maths library]] is itself a type variable.
#include <string>
#include <io>
typevar node;
node::=referencerecord[Int, "data", node, "next"];
function void main() {
var i;
var root:node;
root:=null;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=root;
root:=newnode;
};
while (root != null) {
print(itostring(root.data)+"\n");
root:=root.next;
};
};
This code will build up a linked list of numbers and then walk it, displaying each number as it goes. Whilst it is relatively simple code, it illustrates how one might use type variables to improve the readability of their code. One important point to note is a current limitation in the Mesham parser: we are forced to declare the type variable ''node'' on line four and then separately assign to it at line five. The reason for this is that the assignment references back to the ''node'' type variable in the [[Referencerecord|referencerecord]] type, and as such it must already exist.
== Limitations ==
There are some important limitations to note about the current use of types. Types currently only exist explicitly during compilation - what this means is that you can not do things such as passing them into functions or communicating them. Additionally, once allocation information (the [[Allocated|allocated]] type) and its subtypes have been set then you can not modify this, nor can you change the [[:Category:Element_Types|element type]].
[[Category: Tutorials|Advanced Types]]
1bce0537b1747d60db6fda126b75118db6183104
Sync
0
41
236
235
2016-10-19T13:10:01Z
Polas
1
wikitext
text/x-wiki
== Syntax ==
sync name;
Where the optional ''name'' is a variable.
== Semantics ==
Will complete asynchronous communications and can act as a barrier involving all processes. This keyword is linked with default shared memory (RMA) communication and specific types such as the async communication type. If the programmer specifies an explicit variable name then synchronisation will occur just for that variable, completing all outstanding communications for that specific variable only (without any global barrier). In the absence of a variable name, synchronisation (completing outstanding communications) occurs for all variables, followed by a global barrier. When asynchronous communication (via default shared memory RMA or explicit types) is involved, the value of a variable can only be guaranteed once a corresponding synchronisation (either naming that variable, or global without any variable) has completed.
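== Example ==
A minimal sketch of both forms (the variable ''a'' and its value are purely illustrative): the first ''sync'' completes outstanding communications for ''a'' only, while the second completes all outstanding communications and acts as a global barrier.
function void main() {
var a:Int::allocated[multiple];
a:=10;
sync a;
sync;
};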
''Since: Version 0.5''
[[Category:Parallel]]
18c7fcbe7dd4a8aae380e11d709d77be57bd4ba8
Download 1.0
0
229
1290
1289
2016-10-19T14:45:49Z
Polas
1
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_411|released=August 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch and this line of compiler (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_411 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Experimental thread based runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtlthreads64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtlthreads32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running under a 32 bit or 64 bit system, then issue uname -m; a result of x86_64 means 64 bit, any other value such as i686 is 32 bit.''
== Prerequisites ==
In order to compile and run Mesham code you need an implementation of MPI (version 3) and a C compiler. We suggest '''MPICH''' and '''GCC''', which are available in source and binary form; most systems also make them available via their package manager (e.g. apt-get). Refer to your system documentation for the best way to get these packages if you do not already have them installed.
If you are using the experimental thread based runtime library then MPI is not required, the thread based RTL uses pthreads which is usually already installed.
== Installation Instructions ==
Whilst installation is a manual procedure, the good news is that it is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested either to add that location to your path environment variable or to add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary, so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use - we suggest mpicc, and if you agree then issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler and ''all components'' archives in the ''includes'' directory). Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''). It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
An optional environment variable is the '''MESHAM_C_COMPILER_ARGS''' variable, which allows for specific flags to be provided to the underlying C compiler on each run regardless of the Mesham code or explicit user command line arguments. This is useful to apply certain machine specific optimisations.
If you do not wish to set these last two environment variables then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Once this is done we are good to go; issue ''mcc -env'', which will display the environment variables.
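Putting it together, a typical set of exports might look like the following - the ''/opt/mesham'' paths are purely illustrative and should be replaced with wherever you unpacked the archive:

```shell
# Which C compiler the Mesham compiler should invoke
export MESHAM_C_COMPILER=mpicc
# Directory containing the Mesham system .mesh include files
export MESHAM_SYS_INCLUDE=/opt/mesham/includes
# Directory containing the mesham.h header (the rtl directory)
export MESHAM_C_INCLUDE=/opt/mesham/rtl
# Directory containing the runtime library and libgc
export MESHAM_C_LIBRARY=/opt/mesham/rtl
```

Adding these lines to your ''.bashrc'' makes them persist across sessions.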
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler). All being well, an executable ''test'' will appear; run this via ''mpiexec -np 1 ./test'' after ensuring your favourite MPI process manager is running.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
e37a1b609f623fdbb19d2101635d2fe2c3db8f1e
Main Page
0
1
1
2019-04-15T14:52:14Z
MediaWiki default
0
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
5702e4d5fd9173246331a889294caf01a3ad3706
10
1
2019-04-15T15:44:23Z
Polas
1
1 revision imported
wikitext
text/x-wiki
<strong>MediaWiki has been installed.</strong>
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
5702e4d5fd9173246331a889294caf01a3ad3706
MediaWiki:Sitenotice
8
2
4
3
2019-04-15T15:44:23Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
MediaWiki:Monobook.css
8
3
8
7
2019-04-15T15:44:23Z
Polas
1
3 revisions imported
css
text/css
/* CSS placed here will affect users of the Monobook skin */
#ca-edit { display: none; }
d1e56f596937430f27e759fe45a4c0e8dabde0f9
MediaWiki:Mainpage
8
4
12
11
2019-04-15T15:44:23Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Mesham
9deaf65c813c450f1cd04c627b6f6178c9d18fcc
Mesham
0
5
26
25
2019-04-15T15:44:23Z
Polas
1
13 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
<div id="Mesham"></div> __NOTOC__ __NOEDITSECTION__
<!-- Welcome box -->
{{Welcome}}
{{Help Us}}
<!-- Table stuff -->
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 66%; vertical-align: top;" |
<!-- First column -->
{{Box|subject= News|title= Latest developments}}
{| style="width: 100%; margin: 0; padding: 0; border: 0; border-collapse: collapse;"
| style="padding: 0; width: 50%; vertical-align: top;" |
{{Box|subject= Documentation|title= Documentation}}
| style="padding: 0 0 0 10px; width: 50%; vertical-align: top;" |
{{Box|subject= Examples|title= In code}}
|}
| style="padding: 0 0 0 10px; width: 33%; vertical-align: top;" |
<!-- Third column -->
{{Box|subject= Introduction|title= Quick start}}
{{Box|subject= Downloads|title= Downloads}}
|}
54cf603ea2f185ff2ceb70e4d17b6b74120b70fb
Template:Box
10
6
31
30
2019-04-15T15:44:23Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 0px solid #CEDFF2; padding:0.6em 0.8em;">
<h2 style="margin:0;background-color:#CEDFF2;font-size:120%;font-weight:bold;border:1px solid #A3B0BF;text-align:left;color:#000;padding:0.2em 0.4em;">{{{title}}}</h2>
{{{{{subject}}}}}
</div>
0c34a4fcc1c10a40fb3864504b45326b1b8e02d5
Template:Help Us
10
7
35
34
2019-04-15T15:44:24Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
<!--<div style="margin: 0 0 15px 0; padding: 0.2em; background-color: #EFEFFF; color: #000000; border: 1px solid #9F9FFF; text-align: center;">
'''Mesham always needs your help! See the [[Wish List]] for more information.'''
</div>-->
95023eb69f0fb5c9b3b39fe0bea0b51a2c337ec8
Template:Welcome
10
8
41
40
2019-04-15T15:44:24Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
<div style="margin: 0 0 15px 0; padding: 1px; border: 1px solid #CCCCCC;">
{| style="width: 100%; margin: 0; padding: 0; border: 0; background-color: #FCFCFC; color: #000000; border-collapse: collapse;"
| align="center" style="vertical-align: top; white-space:nowrap;" |
<div class="plainlinks" style="width: 30em; text-align: center; padding: 0.7em 0;">
<div style="font-size: 220%;">Welcome to [http://www.mesham.com/ Mesham]</div>
<div style="font-size: 90%; margin-top: 0.7em; line-height: 130%;">Mesham is a type oriented programming language allowing the writing of high <br>performance parallel codes which are efficient yet simple to write and maintain.</div>
</div>
|}
</div>
1831875e9d0ffec6e245044ea9f980ba8d9a3c5c
Template:Stylenav
10
9
43
42
2019-04-15T15:44:24Z
Polas
1
1 revision imported
wikitext
text/x-wiki
<div style="margin: 0 0 10px 0; padding: 0 1em 0.7em 1em; background-color: #F5FAFF; color: #000000; border: 1px solid #CEDFF2; padding:0.2em 0.2em; text-align: center;">
'''Display: [[Main Page|Short view]] - [[Expanded Main Page|Expanded view]]'''
</div>
3ae3d5e6e4f10637c2693da361aa93b4b26a1bf9
Template:Introduction
10
10
51
50
2019-04-15T15:44:24Z
Polas
1
7 revisions imported
wikitext
text/x-wiki
*[[What_is_Mesham|What is Mesham?]]
*[[Parallel_Computing|Parallel Computing]]
**[[Communication]]
**[[Computation]]
*[[Type Oriented Programming Concept|Type Oriented Programming]]
*[[:Category:Tutorials|Mesham Tutorials]]
*[[:Category:Example Codes|Example Codes]]
2ddc26f38cee1d46cc06b7a785c0e5fbe9db8bc7
Template:Downloads
10
11
66
65
2019-04-15T15:44:24Z
Polas
1
14 revisions imported
wikitext
text/x-wiki
*[[Specification|Language specification]]
<hr>
*[[Download_1.0|Complete compiler (''version 1.0.0_411'')]]
*[[Download_rtl_1.0|Runtime library 1.0.03]]
*[[Download_libgc|Garbage collector 7.2]]
<hr>
*[[Arjuna|Legacy versions]]
0a82304e9ab76590a50013e401fb38aaaf342dd3
Template:Examples
10
12
74
73
2019-04-15T15:44:24Z
Polas
1
7 revisions imported
wikitext
text/x-wiki
*Selected tutorials
**[[Tutorial - Hello world|Hello world]]
**[[Tutorial - Simple Types|Simple Types]]
**[[Tutorial - Functions|Functions]]
**[[Tutorial - Parallel Constructs|Parallel Constructs]]
**[[:Category:Tutorials|'''All tutorials''']]
*Selected codes
**[[Mandelbrot]]
**[[NAS-IS_Benchmark|NASA IS benchmark]]
**[[Image_processing|Image Processing]]
**[[Dartboard_PI|Dartboard method find PI]]
**[[:Category:Example Codes|'''All codes''']]
7c176074c644bfa475c4f660e42e3b707815293c
Template:In Development
10
13
79
78
2019-04-15T15:44:25Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
*Mesham
**[[General Additions]]
**[[Extendable Types]]
*[[New Compiler]]
4d25cd2ec6e8a87ac0b007ac4b25dc6f84ecafa5
Template:Documentation
10
14
90
89
2019-04-15T15:44:25Z
Polas
1
10 revisions imported
wikitext
text/x-wiki
*[[Introduction]]
**[[The Compiler]]
**[[The Idea Behind Types]]
*[[:Category:Core Mesham|Core Mesham]]
**[[:Category:Types|Types]]
**[[:Category:Sequential|Sequential]]
**[[:Category:Parallel|Parallel]]
**[[Functions]]
**[[:Category:Preprocessor|Preprocessor]]
*[[:Category:Type Library|Type Library]]
**[[:Category:Element Types|Element Types]]
**[[:Category:Compound Types|Compound Types]]
*[[:Category:Function Library|Function Library]]
3483b754ff50c48b8c86563d33f6838faa7a1841
What is Mesham
0
15
97
96
2019-04-15T15:44:25Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==Introduction==
As technical challenges increase, the notion of using many computers to solve tasks is a very attractive one and has been the focus of much research. However, as the hardware has matured, a weakness in this field has been exposed - it is actually very difficult to write parallel programs of any complexity, and if the programmer is not careful they can end up with an abomination to maintain. Up until this point, simplicity of programming and efficiency have been a tradeoff, with the most common parallel codes being written in low level languages.
==Mesham==
'''Mesham''' is a programming language designed to simplify High Performance Computing (HPC) yet result in highly efficient executables. This is achieved mainly via the type system: allowing programmers to provide extra typing information not only lets the compiler perform far more optimisation than is traditionally possible, but also enables conceptually simple programs to be written. Code written in Mesham is relatively simple, efficient, portable and safe.
==Type Oriented Programming==
In ''type oriented programming'' the majority of the complexity of the language is taken away and put into the type system. Whilst abstractions such as functional programming and object orientation have become popular and widespread, use of the type system in this way is completely novel. Placing the complexity of the language into the type system allows for a simple language yet yields high performance due to the rich amount of information readily available to the compiler.
==Why Mesham?==
'''Mesham''' will be of interest to many different people:
*Scientists - With Mesham you can write simple yet highly efficient parallel HPC code which can easily run on a cluster of machines
*HPC Programmers - Mesham can be used in conjunction with Grid computing, with the program being run over a heterogeneous resource
*Normal Computer Users - Programs written in Mesham run seamlessly on SMPs, as a programmer you can take advantage of these multiple processors for common tasks
46c6e8fe76b61074ffc5984bf4554b8f80832120
Mesham parallel programming language:Copyrights
0
16
100
99
2019-04-15T15:44:25Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
The intellectual property of the Mesham programming language, associated compilers, runtime library and documentation, including example codes, is owned by Nick Brown. It may be used and reproduced as per the creative commons licence terms but all ownership remains with the author.
The libgc garbage collector is owned by Hans Boehm and released under this [http://www.hpl.hp.com/personal/Hans_Boehm/gc/license.txt licence]
8f88b8aa523b5a02d288fee7b2456c6635562d88
Introduction
0
17
103
102
2019-04-15T15:44:25Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==Why==
Mesham was developed as a parallel programming language with a number of concepts in mind. From reviewing existing HPC languages it is obvious that programmers place a great deal of importance on both performance and resource usage. Due to these constraining factors, HPC code is often very complicated, laced with little efficiency tricks, and becomes difficult to maintain as time goes on. It is often the case that existing HPC code (frequently written in C with a communications library) reaches a level of complexity where efficiency itself takes a hit.
==Advantages of Abstraction==
By abstracting the programmer from the low level details there are a number of advantages.
*Easier to understand code
*Quicker production time
*Portability easier to achieve
*Changes, such as data structure changes, are easier to make
*The rich parallel structure provides the compiler with lots of optimisation clues
==Important Features==
In order to produce a language which is usable by current HPC programmers, there are a number of features which we believe are critical to the language's success.
*Simpler to code in
*Efficient Result
*Transparent Translation Process
*Portable
*Safe
*Expressive
==Where We Are==
This documentation, and the language, is very much a work in progress. The documentation aims both to illustrate to a potential programmer the benefits of our language and approach, and to act as a reference for those using the language. There is much important development still to be done on the language and tools in order to build upon what has been created thus far.
ba3f4f909927f49e51f081e926c2ccb27a2c6972
The Idea Behind Types
0
18
107
106
2019-04-15T15:44:26Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
==A Type==
The concept of a type will be familiar to many programmers. A large subset of languages follow the syntax [Type] [Variablename], such as "int a" or "float b", to allow the programmer to declare a variable. Such a statement affects both the compiler and the runtime semantics - the compiler can perform analysis and optimisation (such as type checking), and at runtime the variable has a specific size and format. In these sorts of languages, the programmer can be thought of as providing information to the compiler via the type. However, there is only so much that one single type can reveal, and so languages often include numerous keywords to allow the programmer to specify additional information. Taking C as an example, in order to declare a variable "m" to be a character in read only memory the programmer writes "const char m". To extend the language and allow for extra variable attributes (such as where a variable is located in the parallel programming context), new keywords would need to be introduced, which is less than ideal.
==Type Oriented Programming==
The approach adopted by Mesham is to allow the programmer to encode all variable information via the type system, by combining different types together to form a supertype (type chain). In our language, "const char m" becomes "var m: Char :: const[]", where ''var m'' declares the variable, the operator ":" specifies the type and the operator "::" combines two types together. In this case, the supertype is formed by combining the type Char with the type const. It should be noted that some type coercions, such as "Int :: Char", are meaningless, and so rules exist within each type to govern which combinations are allowed.
Type precedence is from right to left - in the example "Char :: const[]", the read only attribute of const can be thought of as overriding the default read/write attribute of Char. Abstractly, the programmer can consider the supertype (type chain) formed to be a little like a linked list. For instance, the supertype created by "A::B::C::D::E" is illustrated below.
<center>[[File:types.jpg|Type Chain Illustration]]</center>
==Advantages==
Using this approach many different attributes can be associated with a variable, the fact that types are loosely coupled means that the language designers can add attributes (types) with few problems, and by only changing the type of a variable the semantics can change considerably. Another advantage is that the rich information provided by the programmer allows for many optimisations to be performed during compilation that using a lower level language might not be obvious to the compiler.
==Technically==
On a more technical note, the type system implements a number of services. These are called by the core of the compiler and if the specific type does not honour that service, then the call is passed onto the next in the chain - until all are exhausted. For instance, using the types "A::B::C::D::E", if service "Q1" was called, then type "E" would be asked first, if it did not honour the service, "Q1" would be passed to type "D" - if that type did not honour it then it would be passed to type "C" and so forth.
542e7ec8569cd648c24cbb57da3a3b53d0081689
File:Types.jpg
6
19
109
108
2019-04-15T15:44:26Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Type Chain formed when combining types A::B::C::D::E
f1c13468bdd6fb5b43f265520ee5b5f847894873
Category:Core Mesham
14
20
113
112
2019-04-15T15:44:26Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Sequential
14
21
115
114
2019-04-15T15:44:26Z
Polas
1
1 revision imported
wikitext
text/x-wiki
[[Category:Core Mesham]]
515e074b32e208d89a0a23a3f9b2b8b9a110dc94
Category:Parallel
14
22
117
116
2019-04-15T15:44:26Z
Polas
1
1 revision imported
wikitext
text/x-wiki
[[Category:Core Mesham]]
515e074b32e208d89a0a23a3f9b2b8b9a110dc94
Category:Preprocessor
14
23
119
118
2019-04-15T15:44:26Z
Polas
1
1 revision imported
wikitext
text/x-wiki
[[Category:Core Mesham]]
515e074b32e208d89a0a23a3f9b2b8b9a110dc94
Declaration
0
24
133
132
2019-04-15T15:44:27Z
Polas
1
13 revisions imported
wikitext
text/x-wiki
== Syntax ==
All variables must be declared before they are used. In Mesham one may declare a variable via its value or explicit type.
var name;<br>
var name:=[Value];<br>
var name:[Type];<br>
Where ''name'' is the name of the variable being declared.
== Semantics ==
The environment will map the identifier to a storage location and that variable is then usable. In the case of a value being specified, the compiler will infer the type via type inference either here or when the first assignment takes place.<br><br>
''Note:'' It is not possible to declare a variable with the value ''null'', as this is a special no-value placeholder and as such has no type.
== Examples ==
function void main() {
var a;
var b:=99;
a:="hello";
};
In the code example above, the variable ''a'' is declared; without any further information its type is inferred from its first use (to hold type String). Variable ''b'' is declared with value 99, an integer, and as such its type is inferred to be Int, allocated on multiple processes.
function void main() {
var t:Char;
var z:Char :: allocated[single[on[2]]];
};
Variable ''t'' is declared to be a character; without further type information it is also allocated on all processes (by default the type Char is allocated to all processes). Lastly, the variable ''z'' is declared to be of type character, but is allocated only on a single process (process 2).
''Since: Version 0.41b''
[[Category:sequential]]
bdb646e3f7d4fe641c6e25916463c9fc4a39c32e
Variable Declaration
0
25
135
134
2019-04-15T15:44:27Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[Declaration]]
3b8c12aa0b78726af77da60c9e428dc5b3648955
Assignment
0
26
141
140
2019-04-15T15:44:27Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
==Syntax==
In order to assign a value to a variable then the programmer will need to use variable assignment.
[lvalue]:=[rvalue];
Where ''lvalue'' is a memory reference and ''rvalue'' a memory reference or expression
== Semantics==
Will assign the value of ''rvalue'' to ''lvalue''.
== Examples==
function void main() {
var i:=4;
var j:=i;
};
In this example the variable ''i'' is declared and set to the value 4, and the variable ''j'' is also declared and set to the value of ''i'' (4). Via type inference the type of both variables will be ''Int''.
''Since: Version 0.41b''
[[Category:sequential]]
93d7df635751b7943577852f9c4cdaf68b8a2205
For
0
27
148
147
2019-04-15T15:44:27Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
for i from a to b <br>
{<br>
forbody<br>
}
== Semantics ==
The for loop can be thought of as syntactic sugar for a while loop, incrementing the variable after each pass; it will loop from ''a'' to ''b'' inclusive.
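That desugaring can be sketched as follows - an illustrative equivalent of ''for i from 0 to 9'' with an empty body, not the compiler's exact output:
function void main() {
var i:=0;
while (i <= 9) {
i:=i+1;
};
};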
== Example ==
#include <io>
#include <string>
function void main() {
var i;
for i from 0 to 9 {
print(itostring(i)+"\n");
};
};
This code example will loop from 0 to 9 (10 iterations) and display the value of ''i'' on each pass.
''Since: Version 0.41b''
[[Category:sequential]]
512654e7fa671e112340ae465d44e201733663b3
While
0
28
153
152
2019-04-15T15:44:27Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
==Syntax==
while (condition) whilebody;
==Semantics==
Will loop whilst the condition holds.
== Examples ==
function void main() {
var a:=10;
while (a > 0) {
a--;
};
};
Will loop, each time decreasing the value of variable ''a'' by 1, until the value reaches 0.
''Since: Version 0.41b''
[[Category:Sequential]]
b94b3ba77562d71ebe482e5599f418ac248b9bbe
Break
0
29
158
157
2019-04-15T15:44:27Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
break;
== Semantics ==
Will break out of the current enclosing loop body.
== Example ==
function void main() {
while (true) { break; };
};
Only one iteration of the loop begins; the break statement immediately exits the loop body.
''Since: Version 0.41b''
[[Category:sequential]]
408e81bc84db59b6551ab1ff27267244cacc1ee2
Try
0
30
163
162
2019-04-15T15:44:27Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
try<br>
{<br>
try body<br>
} catch (error string) { <br>
error handling code<br>
}<br>
== Semantics ==
Will execute the code in the try body and handle any errors. This is very important in parallel computing as it allows the programmer to easily deal with any communication errors that may occur. Exception handling is dynamic in Mesham; the most recently encountered appropriate catch block will be entered, depending on program flow.
== Error Strings ==
There are a number of error strings built into Mesham; additional ones can be specified by the programmer.
*Array Bounds - accessing an array outside its bounds
*Divide by zero - division by zero error
*Memory Out - memory allocation failure
*root - illegal root process in communication
*rank - illegal rank in communication
*buffer - illegal buffer in communication
*count - wrong count in communication
*type - communication type error
*comm - communication communicator error
*truncate - truncation error in communication
*Group - illegal group in communication
*op - illegal operation for communication
*arg - incorrect arguments used for communication
*oscli - error returned by the operating system when performing a system call
== Example ==
#include <io>
#include <string>
function void main() {
try {
var a:array[Int,10];
print(itostring(a[12]));
} catch ("Array Bounds") {
print("No Such Index\n");
};
};
In this example the programmer is trying to access element 12 of array ''a''. Since this element does not exist, instead of it being displayed an error message is put on the screen.
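The same mechanism applies to the other built-in error strings listed above. For instance, a ''Divide by zero'' error might be handled as in the following sketch (the use of a ''/'' division operator here is an assumption, as it is not documented on this page):
#include <io>
function void main() {
var d:=0;
try {
var r:=10 / d;
} catch ("Divide by zero") {
print("Cannot divide by zero\n");
};
};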
''Since: Version 0.5''
[[Category:sequential]]
dc873c1361d5c5abb2e9527611677cbe186602a4
Throw
0
31
169
168
2019-04-15T15:44:28Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
throw errorstring;
== Semantics ==
Will throw the error string, which will either cause termination of the program or, if caught by a try-catch block, be dealt with.
== Example ==
#include <io>
function void main() {
try {
throw "an error";
} catch ("an error") {
print("Error occurred!\n");
};
};
In this example, a programmer defined error ''an error'' is thrown and caught.
''Since: Version 0.5''
[[Category:sequential]]
7d9f05f570df25685680b1deba0b779c485cb5a2
If
0
32
174
173
2019-04-15T15:44:28Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
if (condition)<br>
{<br>
then body<br>
} else {<br>
else body<br>
};<br>
== Semantics ==
Will evaluate the condition and, if true, will execute the code in the ''then body''. Optionally, if the condition is false, the code in the ''else body'' will be executed if this has been supplied by the programmer.
== Example ==
#include <io>
function void main() {
var a:=3;
var b:=3;
if (a==b) {
print("Equal");
};
};
In this code example two variables ''a'' and ''b'' are tested for equality. If equal, the message will be displayed. As no else section has been specified, no specific behaviour is adopted if they are unequal.
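A sketch supplying the optional else body, using only constructs already shown on this wiki, might look as follows:
#include <io>
function void main() {
var a:=1;
var b:=2;
if (a==b) {
print("Equal\n");
} else {
print("Not equal\n");
};
};
Here the condition is false, so the else body executes and ''Not equal'' is displayed.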
''Since: Version 0.41b''
[[Category:sequential]]
bc1ec14c9916f451533963b4892460eaa5bd552e
Conditional
0
33
176
175
2019-04-15T15:44:28Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[If]]
[[Category:sequential]]
258cc19502efae8a52206b699a9b0541ac6fc6ca
Sequential Composition
0
34
181
180
2019-04-15T15:44:28Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
body ; body
== Semantics ==
Will execute the code before the sequential composition, '';'', and then (if this terminates) will execute the code after the sequential composition.<br><br>
''Note:'' Unlike many imperative languages, all blocks must be terminated by a form of composition (sequential or parallel.)
== Examples ==
function void main() {
var a:=12 ; a:=99
};
In the above example variable ''a'' is declared with the value 12; the variable is then modified to hold the value 99.
function void main() {
function1() ; function2()
};
In the second example ''function1'' will execute and then after (if it terminates) the function ''function2'' will be called.
''Since: Version 0.41b''
[[category:sequential]]
f037be84f6a43c186db4b2777331bc1b275856e0
How To Edit
0
35
184
183
2019-04-15T15:44:28Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
==Before you start==
In order to edit the Mesham wiki, you will need to log in. Before you can log in, you must first create an account. This is a simple process – just go to the [[Special:Userlogin|Login page]] and enter the relevant details. Having created an account and logged in, you can then make whatever changes you please throughout most of the wiki. (There are a few sections where only trusted users with greater privileges can make changes.)
Be warned that after a certain amount of inactivity, you will automatically be logged out again. The cutoff is approximately an hour. If you are making anything more than trivial changes, it is better to write them in an external text editor, then cut-and-paste them into place. This reduces the risk that you will lose your work.
== How to edit a Wiki ==
''NB: This is meant as a getting started guide to wiki editing. For a complete list of commands, visit http://en.wikipedia.org/wiki/Help:Editing''
*Every page will have at least one blue '''Edit''' link in a tab at the top of the page (with the exception of certain locked pages).
*Clicking this button when logged in takes you to the editing page
Once at the Editing page, you'll need to know about the format of Wikis.
Generally, everything is designed to be straightforward. riscos.info uses the same [http://meta.wikimedia.org/wiki/MediaWiki MediaWiki] software as [http://www.wikipedia.org/ Wikipedia], so more information can be found reading the [http://meta.wikimedia.org/wiki/Help:Contents MediaWiki Handbook].
=== Formatting ===
Normal text only needs to be typed.
A single new line
doesn't create
a break.
An empty line starts a new paragraph.
*Lines starting with * create lists. Multiple *s nest the list
*Lines starting with # create a numbered list. Using ## and ### will add numbered subsections split with periods.
*Apostrophes can be used to add emphasis. Use the same number of apostrophes to turn the emphasis off again at the end of the section.
**Two apostrophes will put text in italics: <nowiki>''some text''</nowiki> – ''some text''
**Three apostrophes will put text in bold: <nowiki>'''some more text'''</nowiki> – '''some more text'''
**Five apostrophes will put text in bold italics: <nowiki>'''''and some more'''''</nowiki> – '''''and some more'''''
*Sections can be marked by putting the = symbol around the heading. The more = signs are used, the lower-level the heading produced:
**<nowiki>==Main Heading==</nowiki>
**<nowiki>===Sub-heading===</nowiki>
**<nowiki>====Smaller sub-heading====</nowiki>
*Some standard HTML codes can also be used: <nowiki><b></nowiki><b>bold</b><nowiki></b></nowiki> <nowiki><font color="red"></nowiki><font color="red">red</font><nowiki></font></nowiki> Please use these sparingly. However, if you want some text to be in single quotes and italics, <nowiki><i>'quotes and italics'</i></nowiki> produces <i>'quotes and italics'</i> while three quotes would produce <nowiki>'''bold instead'''</nowiki> – '''bold instead'''.
*HTML glyphs – &pound; £, &OElig; Œ, &deg; °, &pi; π etc. may also be used. (The <nowiki><nowiki> and </nowiki></nowiki> tags do not affect these.)
**The ampersand (&) '''must''' be written with the &amp; glyph.
*To override the automatic wiki reformatting, surround the text that you do ''not'' want formatted with the <nowiki><nowiki></nowiki> and <nowiki></nowiki></nowiki> tags.
*A line across the page can be produced with four - signs on a blank line:
<nowiki>----</nowiki>
----
*Entries may be signed and dated (recommended for comments on talk pages) with four tildes: <nowiki>~~~~</nowiki> [[User:Simon Smith|Simon Smith]] 02:05, 25 May 2007 (BST)
=== Linking and adding pictures ===
To link to another article within the wiki, eg: [[RISC OS]], type double brackets around the page you want to link to, as follows: <nowiki>[[Page name here]]</nowiki>. If the page you refer to already exists, <nowiki>[[Page name here]]</nowiki> will appear as a blue clickable link. Otherwise, it will appear as a red 'non-existent link', and following it will allow you to create the associated page.
To add a picture, use a link of the form <nowiki>[[Image:image name here|alternative text here]]</nowiki>. For example, <nowiki>[[Image:zap34x41.png|Zap icon]]</nowiki> gives the Zap application icon: [[Image:zap34x41.png|Zap icon]]
There is a summary [[Special:Imagelist|list of uploaded files]] available, and a [[Special:Newimages|gallery of new image files]].
To link to an external URL, type the URL directly, including the leading <nowiki>'http://'</nowiki>, as follows: http://riscos.com. To change how a link to an external URL appears, type ''single'' brackets around the URL, and separate the URL from the alternative text with a space. For example, <nowiki>[http://riscos.com Text to appear]</nowiki> gives [http://riscos.com Text to appear]. As an anti-spamming measure, you will have to enter a CAPTCHA code whenever you add a new link to an external page. The following link gives [[Special:Captcha/help|further information on CAPTCHAs]].
When providing a link, try to make the clickable part self-descriptive. For example, 'The following link gives [[Special:Captcha/help|further information on CAPTCHAs]]' is preferable to 'For further information on CAPTCHAs, click [[Special:Captcha/help|here]]'. A link that says 'click here' is only understandable in context, and users may not be able to tell where the link will send them until they click on it.
If you link to a page that doesn't exist, following the link will send you to a blank page template, allowing you to edit and thus create the new page: [[A page that doesn't exist]].
If you wanted to link to another Wiki article ''X'', but display text ''Y'', use a 'piped link'. Type the name of the page first, then a pipe symbol, then the alternative text. For example, <nowiki>[[RISC OS|Front page]]</nowiki> gives [[RISC OS|Front page]].
=== General Advice ===
The [[RISC OS|front page]] has several categories listed on it. While this list can grow, if your article can fit in one of these categories, then go to the category page in question and add a link to it.
When creating a new page, make use of the ''Preview'' button to avoid filling up the change log with lots of revisions to your new article and always include some information in the 'Summary' box to help others see what's happened in the change log.
If you think a page should exist, but you don't have time to create it, link to it anyway. People are far more likely to fill in blanks if they can just follow a link than if they have to edit links all over the place.
Above all, keep it factual, professional and clean. If you don't, you are liable to be banned from further contribution, and someone will fix your errors anyway! As the disclaimer says: ''''If you don't want your writing to be edited mercilessly and redistributed at will, then don't submit it here.'''' [http://www.wikipedia.org Wikipedia] is proof that the idea works, and works well.
=== Brief Style Guide ===
This subsection gives a brief summary of the style conventions suggested for use throughout the wiki.
* Terms which are particularly important to an entry should have links provided. Terms of only minor relevance should not be linked. It is only necessary to provide a link the first time a related term is used, not every time it appears. Additional links may still be added in longer entries and in any other cases where readers are likely to find it helpful.
* Write out unusual abbreviations in full the first time they are used within each article, and then give the abbreviation within parentheses. (For example: 'Programmer's Reference Manual (PRM)'.) Thereafter, use the abbreviation without further comment. In 'general' articles, the threshold for what is considered an unusual abbreviation will be lower than in 'technical' articles.
* When linking to a compound term include the full term inside the link (rather than part of the term inside the link, part outside) and if necessary use the pipe ('|') symbol to provide more suitable alternative text. For example, use "''[[Martin Wuerthner|Martin Wuerthner's]] applications include …''" rather than "''[[Martin Wuerthner]]'s applications include …''"
* Try to ensure that every link (briefly) describes its contents. Avoid sentences that say, 'To find out about XYZ, [[A page that doesn't exist|click here]]'; instead use sentences of the form, 'Follow this link [[A page that doesn't exist|to find out about XYZ]]'.
* As far as possible use the Wiki codes for bold, italic, lists, etc. rather than inserting HTML markup.
* Use single quotes in preference to double quotes except when quoting a person's actual words.
* Write single-digit numbers in words, numbers of 13 or more as numbers. The numbers 10-12 represent a grey area where either convention may be used as seems appropriate. The best guide is to stay consistent within a particular section of a document. Number ranges and numbers with decimal fractions should always be written as numbers.
* Use HTML glyphs for specialist symbols. Do not forget the trailing semicolon – while most browsers will still display the glyph even if the semicolon is missing, this is not guaranteed to work reliably. Of the sample glyphs given, the ampersand, quotes, and the less than and greater than symbols are the least critical, because the Wiki software will usually automatically alter them to the correct forms. A Google search for [http://www.google.co.uk/search?hl=en&ie=ISO-8859-1&q=HTML+glyphs&btnG=Google+Search&meta= HTML glyphs] gives several useful summaries. Some commonly-used glyphs are given below:
**ampersand : & : &amp;
**dashes : — – : &mdash; &ndash;
**double quotes : " : &quot;
**ellipsis : … : &hellip;
**hard space : : &nbsp;
**less than, greater than : < > : &lt; &gt;
**pound : £ : &pound;
**superscripts : ² ³ : &sup2; &sup3;
* Avoid contractions (it's, doesn't) and exclamations.
* When giving a list of items, provide the entries in ascending alphabetical order unless there is some other more compelling sequence.
* When leaving comments on discussion pages, sign them with four tildes – <nowiki>~~~~</nowiki>. This adds your user name and the time and date.
* In general, the desired tone for the RISC OS wiki is similar to that of a RISC OS magazine. However, highly technical articles should be written to have the same tone and style as the entries in the [[RISC OS Documentation|RISC OS Programmer's Reference Manuals]].
=== Templates ===
Templates allow information to be displayed in the same format on different, related, pages (such as the info box on [http://en.wikipedia.org/wiki/RISC_OS this Wikipedia page]), or to link together related articles (such as the box on [[QEMU|this page]]).
See this [http://home.comcast.net/~gerisch/MediaWikiTemplates.html Getting Started HOWTO], or try editing a [http://en.wikipedia.org/wiki/Wikipedia:Template_messages Wikipedia template] to see the source for an existing example.
The main templates in use within the RISCOS.info Wiki are the [[Template:Application|Application]] and [[Template:Applicationbox|Applicationbox]] templates. Instructions on how to use them are given on their associated talk pages. A couple of Infobox templates have also been set up, but these do not require per-use customisation.
* [[Template_talk:Application|How to use the Application template]]
* [[Template_talk:Applicationbox|How to use the Applicationbox template]]
* [http://www.riscos.info/index.php?title=Special%3AAllpages&from=&namespace=10 List of current templates]
== Talk Pages ==
Every wiki page has a [http://www.mediawiki.org/wiki/Help:Talk_pages Talk page] associated with it. It can be reached through the ''discussion'' tab at the top of the page.
The Talk page is useful for remarks, questions or discussions about the main page. By keeping these on the Talk page, the main page can focus on factual information.
Please observe the following conventions when writing on the Talk page (for a full description see the [http://www.mediawiki.org/wiki/Help:Talk_pages MediaWiki page on Talk pages]):
*Always sign your name after your comments using four tildes '<tt><nowiki>~~~~</nowiki></tt>'. This will expand to your name and a date stamp. Preferably precede this signature with two dashes and a space: '<tt><nowiki>-- ~~~~</nowiki></tt>'.
*Start a new subject with a <tt><nowiki>== Level 2 Heading ==</nowiki></tt> at the bottom of the page.
*Indent replies with a colon ('<tt>:</tt>') at the beginning of the line. Use multiple colons for deeper indents. Keep your text on one line in the source for this to work. If you really must have more than one paragraph, start that paragraph with a blank line and a new set of colons.
*Unlike in the normal wiki pages, normally you should not edit text written by others.
== Moderating Others' Work ==
If you spot a mistake in someone else's work, correct it, but make a note in the 'Summary' box stating the reason for the change, eg: ''Fixed speeling mistooks''.
If you feel you can add useful information to an existing page, then add it. If you feel something should be removed, remove it, but state why in the 'Summary' box. If it's a point of contention, use the article [[#Talk Pages|talk page]] to start a discussion about it.
Before removing or making significant changes to someone else's contribution, consider the [http://meta.wikimedia.org/wiki/Help:Reverting#When_to_revert guidance on "reverting"] from wikimedia.
== Reverting spam ==
Administrators can make use of a [http://en.wikipedia.org/wiki/Wikipedia:Rollback_feature fast rollback facility]. Bring up the article, then click on the History tab. Select the version you wish to rollback to in the first column, and the current version in the second. Click 'compare selected versions'. In the second column will be a 'Rollback' link: click this to rollback. It will also place a comment in the log denoting the rollback.
Reverting when not an administrator is slightly more complicated - see these [http://en.wikipedia.org/wiki/Help:Reverting#How_to_revert instructions on how to revert].
8fccbc1e46370256f04f3f7e933ccf039053d740
Help:Contents
12
36
186
185
2019-04-15T15:44:28Z
Polas
1
1 revision imported
wikitext
text/x-wiki
A few useful links
# [[How_To_Edit|How To Edit]]
f89b5cd5a3eb031ece0ce2bd9d7fdd071d57f4eb
Download 0.41 beta
0
37
201
200
2019-04-15T15:44:29Z
Polas
1
14 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Mesham 0.41b|author=[[User:polas|Nick Brown]]|desc=The first release from the Arjuna compiler line and the last version to work on Windows. Based upon FlexibO this version is deprecated but still contains some useful types.|url=http://www.mesham.com|image=mesham.gif|version=0.41b|released=September 2008}}
''Please Note: This version of Mesham is deprecated, the documentation and examples on this website are no longer compatible with this version.''
== Version 0.41 ==
Available in this package is version 0.41 (beta). This version of the language has the majority of current functionality, although there are some aspects unavailable which means that the Gadget-2 port is not supported by this version (it requires 0.50.) Having said that, version 0.41 is the only one which currently explicitly supports Windows. Most likely explicit support for Windows will be dropped in the 0.50 release, although advanced users should still be able to get it running on that OS.
== Download ==
You can download [http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b) here], a zip file of approximately 1MB which supports both POSIX systems and Windows. Full installation instructions for your specific system are included in the download, and are also given on this page.
== Installation on POSIX Systems ==
*Install Java RTE from java.sun.com
*Make sure you have a C compiler installed, e.g. gcc
*Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
*The three components must be configured for your machine and their locations; happily this is all automated by the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on-screen prompts.
If the command will not run, use chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from where they are now - this will cause problems and if required you must rerun the script and remake them.
*Now type make all
*If you have root access, login as root and type make install
*Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language onto your computer! Now read the readme file for information on how to run the compiler.
NB: If you wish to change the configuration information created by the installer (this is for advanced users and is not required) then you can - the installer tells you where it has written its config files, and the documentation is included in the respective source folders.
== Installation on Windows Systems ==
The best way is to install a POSIX based system and follow those instructions. No, seriously - many of the tools and much of the support for parallelism really are designed for Unix based OSes, and as such you will have an uphill struggle as a Windows user. Whilst version 0.41 does fully support Windows, it will most likely be the last version to do so explicitly (although for an advanced user installation and usage on Windows should still be possible in the future.) Having said that, we have had Mesham 0.41 running fine on Windows - it just requires more setup, as far fewer tools are included by default.
==== Requirements ====
#Java Run Time Environment from java.sun.com
#A C compiler and GNU maker - MinGW is a very good choice that we suggest, at http://www.mingw.org/
#An implementation of MPI (see the MPI section for further details.)
==== Install ====
Most of the hard work of installing Mesham has been done for you, but you will still need to configure the language.
*Unzip the language zip file and extract its contents to a directory - we would suggest c:\mesham but it really doesn't matter
*Now double click the installwindows.bat file - this will run the installation script; make sure you answer all the questions correctly (if you make an error just rerun it.) The script does a number of things: firstly it automatically configures the compiler with your settings, secondly it configures the server and lastly it compiles the compiler. If you ever want to change the settings, you will need to rerun this configuration script. To install the server but not compile the compiler, just run installwindows.bat with the option -nocompile
*Lastly you will need to install the runtime library. There are a number of options here. The simplest is to use one of our prebuilt libraries. In the libraries directory there will be two zip files, one called win32binlibrary and the other win64binlibrary. Depending on whether your system is 32 or 64 bit (most commonly, Core and Core 2 processors are 64 bit) extract the contents of the zip file into the libraries directory. Then copy (or move) mesham.dll and pthreadGC2.dll into c:\windows\system32. By the end of this step, you should have a file called libmesham.a in the libraries directory and both mesham.dll and pthreadGC2.dll in c:\windows\system32. If you wish to compile the runtime library rather than use our prebuilt ones, then read the readme file in the libraries\windows directory. Note at this stage that if you wish to distribute the executables you compile, the user must have mesham.dll and pthreadGC2.dll on their machine, but libmesham.a is required for compiling only.
*That's all the hard work done! For ease of use, we would suggest adding mc.exe (the file just compiled, in compiler\bin) to your MSDOS path. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\mesham\compiler\bin then click OK. (The ; simply separates paths; this assumes you have installed the language in c:\mesham - if not, change the path accordingly.)
Note - if you ever wish to move the location of the language files, you will need to rerun the installwindows.bat file to reconfigure the setup. Secondly, there is a prebuilt server runner called winrunserver.bat with some default options. If you don't want to build the items, you can run this and then run compiler/wingui.bat for the Mesham into C viewer; without any other steps that will work fine.
==== Using Mesham on Windows ====
'''IMPORTANT''' - you MUST make the MPI executable files visible to Mesham. To do this, go to the Control Panel, System, Advanced tab, click on Environment Variables and, under System variables, scroll down to Path and edit it to add ;c:\program files\mpich2\bin then click OK. (The ; simply separates paths; this assumes you have installed MPICH2 in c:\program files\mpich2 - if not, change the path accordingly.)
As long as you have made mc.exe and the MPI executable files visible via the path, then you can create Mesham source files and compile them anywhere. We will detail how to simply get yourself up and running in this text, consult the language manual for specific language details.
*First, run the server - this can be found in the server directory; simply double click runserver.bat. The server will start up (this can take a few moments) and will tell you when it's ready
*Now, create a file - let's call it a.mesh. For the contents just put in:
var a:=34;
print[a,"\n"];
*Open an MSDOS terminal window, change to the directory where a.mesh is located and type mc a.mesh. The compiler should generate a.exe, which you can run via MSDOS or by double clicking on it. There are many other options available; type mc -h to list them
If there are any problems, you might need to configure/play around with your MPI implementation. Certainly with MPICH2 you might need to start the process manager, called smpd.exe in the mpich2/bin directory, and wmpiconfig.exe is required initially to register a username/password with the process manager.
If you wish only to view the C code, but not compile it, you can use the language C code viewer by double clicking windowsgui.bat in compiler\java
==== MPI for Windows ====
It doesn't matter which implementation you install. Having said that, it seems that the majority of implementations have been created with Unix in mind rather than Windows. MPICH certainly supports Windows, but you need MS Visual Studio to use the automated installer. To install MPICH for Windows, make sure you have MS Visual Studio, Intel Fortran (a free download from their site) and also the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86) from http://www.microsoft.com/downloads/thankyou.aspx?familyId=200b2fd9-ae1a-4a14-984d-389c36f85647&displayLang=en# Then download MPICH for Windows at http://www.mcs.anl.gov/research/projects/mpich2/ under releases and install. This will work automatically via the MPICH installer.
There are other options too, OpenMPI might be a possibility via Cygwin.
== Differences between 0.41 and 0.50 ==
The current language version is 0.50, which has been used for the Gadget-2 and NASA PB work and much of the recent work on the language. It is hoped to make 0.50 available for download ASAP. There are some important differences between the two versions; some of the improvements in 0.50 include:
*Records may refer to themselves (via the reference record type) and be communicated as such
*Ability to use native C code
*64 bit Integer element type
*Gadget-2 extension types
*Communication Modes
*Default communication supported within par loops (MPMD style)
*Additional collection types
*Improved Preprocessor and support for including multiple source files
*Improved Error Handling Support
*Numerous bug fixes and other improvements
3f1b8f553ee2211ed7914b48e66b50489034426c
Functions
0
38
209
208
2019-04-15T15:44:29Z
Polas
1
7 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Syntax ==
function returntype name(arguments)
Where ''returntype'' is a type chain or ''void''.
== Semantics ==
The type of the variable depends on the pass semantics (by reference or value.) Broadly, all [[:Category:Element Types|element types]] by themselves are pass by value and [[:Category:Compound Types|compound types]] are pass by reference, although this behaviour can be overridden by additional type information. Memory allocated onto the heap is pass by reference; static or stack frame memory is pass by value.
== Example ==
function Int add(var a:Int,var b:Int) {
return a + b;
};
This function takes two integers and will return their sum.
function void modify(var a:Int::heap) {
a:=88;
};
In this code example, the ''modify'' function will accept an integer variable but this is allocated on the heap (pass by reference.) The assignment will modify the value of the variable being passed in and will still be accessible once the function has terminated.
== Function prototypes ==
Instead of specifying the entire function, the programmer may provide just the prototype (no body) of the function, and resolution will be deferred until link time. This mechanism is most popular for using functions written in other languages; note that you must use the '''native''' modifier with native function prototypes.
=== Native function example ===
function native void myNativeFunction(var a:Int);
== The main function ==
Returns void and takes either zero arguments or two. If present, the first argument is the number of command line interface parameters passed in, and the second is a String array containing them; location 0 of the string array is the program name. The main function is the program entry point; it is fine for this not to be present in a Mesham code, in which case the code is assumed to be a library and accessed only via linkage.
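A sketch of the two-argument form is given below; note that the parameter names and the exact spelling of the String array parameter type are assumptions, not part of the documented syntax:
#include <io>
function void main(var argc:Int, var args:array[String]) {
print(args[0]);
};
As location 0 of the string array is the program name, this sketch simply displays the name of the program.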
''Since: Version 0.41b''
[[Category:Core Mesham]]
c03af1f16d2ef0f0131447ab3b4f44ce205343c7
Par
0
39
220
219
2019-04-15T15:44:29Z
Polas
1
10 revisions imported
wikitext
text/x-wiki
== Syntax ==
par p from a to b<br>
{<br>
par body<br>
};<br>
== Semantics ==
The parallel equivalent of the for loop: each iteration will execute concurrently on a different process. This allows the programmer to write code in MPMD style, with the limitation that the bounds ''a'' and ''b'' must be known during compilation. Variables declared to be multiply allocated within a parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' There is no guarantee as to the ranks of the processes involved within a par block; such a block will be distributed over the ranks which are most appropriate at that time.
== Example ==
#include <io>
function void main() {
var p;
par p from 0 to 9 {
print("Hello world\n");
};
};
The code fragment will involve 10 processes (0 to 9 inclusive) and each will display a ''Hello world'' message.
''Since: Version 0.41b''
[[Category:Parallel]]
3908eb26930ae997d9c2525ae27e75341f634582
Proc
0
40
230
229
2019-04-15T15:44:29Z
Polas
1
9 revisions imported
wikitext
text/x-wiki
== Syntax ==
proc n<br>
{<br>
process body<br>
}
where ''n'' is a variable or value known at compile time.
== Semantics ==
This will limit execution of a block to a certain process whose rank is guaranteed to be that specified.<br><br>
''Note:'' A variable declared within a proc block and allocated multiple will in fact, by inference, be allocated to the group of processes which contains the single process whose rank is that of the proc block.
== Example ==
#include <io>
function void main() {
proc 0 {
print("Hello from 0\n");
};
proc 1 {
print("hello from 1\n");
};
};
The code example will run on two processes, the first will display the message ''Hello from 0'', whilst the second will output the message ''hello from 1''.
''Since: Version 0.41b''
[[Category:Parallel]]
75a24e7b06d099010a8d14a6f8188a48c65f9f37
Sync
0
41
237
236
2019-04-15T15:44:30Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
sync name;
Where the optional ''name'' is a variable.
== Semantics ==
This will complete outstanding asynchronous communications and can act as a barrier involving all processes. The keyword is linked with default shared memory (RMA) communication and with specific types such as the async communication type. If the programmer specifies an explicit variable name then synchronisation occurs for that variable only, completing all of its outstanding communications without any global barrier. In the absence of a variable name, synchronisation (completing outstanding communications) occurs for all variables, followed by a global barrier. When asynchronous communication (via default shared memory RMA or explicit types) is involved, the value of a variable can only be guaranteed once a corresponding synchronisation (either naming that variable, or global) has completed.
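== Example ==
The following sketch adapts the example from the [[Onesided]] page to illustrate synchronising a single variable:
var i:Int::onesided::allocated[single[on[2]]];
proc 0 {
i:=34;
};
sync i;
Process zero writes the value 34 into ''i'', which is held on process two; ''sync i'' then completes the outstanding communication for that variable only, without a global barrier, after which the value of ''i'' is guaranteed.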
''Since: Version 0.5''
[[Category:Parallel]]
18c7fcbe7dd4a8aae380e11d709d77be57bd4ba8
Skip
0
42
240
239
2019-04-15T15:44:30Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Syntax ==
skip
== Semantics ==
Does nothing!
''Since: Version 0.41b''
[[Category:Sequential]]
a6518135018132abcab4e83ca85db2a4e376eb27
Operators
0
43
247
246
2019-04-15T15:44:30Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Operators ==
#+ Addition
#- Subtraction
#<nowiki>*</nowiki> Multiplication
#/ Division
#++ Pre or post fix addition
#-- Pre or post fix subtraction
#<< Bit shift to left
#>> Bit shift to right
#== Test for equality
#!= Test for inverse equality
#! Logical negation
#( ) Function call or expression parentheses
#[ ] Array element access
#. Member access
#< Test lvalue is smaller than rvalue
#> Test lvalue is greater than rvalue
#<= Test lvalue is smaller or equal to rvalue
#>= Test lvalue is greater or equal to rvalue
#?: Inline if operator
#|| Logical short circuit OR
#&& Logical short circuit AND
#| Logical OR
#& Logical AND
#+= Plus assignment
#-= Subtraction assignment
#<nowiki>*</nowiki>= Multiplication assignment
#/= Division assignment
#%= Modulus assignment
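== Example ==
As a brief illustrative sketch, several of these operators in use (assuming C style syntax for the inline if operator):
function void main() {
var a:=10;
var b:=3;
var max:=(a > b) ? a : b;
max+=a*b;
max++;
};
Here the inline if operator selects the larger of ''a'' and ''b'', the multiplication and plus assignment operators then update it, and the postfix addition operator increments it by one.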
[[Category:Core Mesham]]
a259ab2da783ce5d91abe55f46ce697bbe03ee9f
Category:Element Types
14
44
249
248
2019-04-15T15:44:30Z
Polas
1
1 revision imported
wikitext
text/x-wiki
[[Category:Type Library]]
59080a51ca9983880b93aaf73676382c72785431
Int
0
45
256
255
2019-04-15T15:44:30Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
Int
== Semantics ==
A single 32 bit whole number. This is also the type of integer constants.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Int;
var b:=12;
};
In this example variable ''i'' is explicitly declared to be of type ''Int''. On line 2, variable ''b'' is declared and via type inference will also be of type ''Int''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
bdaff79c7868cffdc1ffc373426196718021a549
Template:ElementTypeCommunication
10
46
262
261
2019-04-15T15:44:30Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
When a variable is assigned to another, depending on where each variable is allocated, communication may be required to achieve the assignment. The table below details the communication rules for the assignment ''assigned variable := assigning variable''. If the communication is issued from the MPMD programming style then it will be one sided. The default communication listed here is guaranteed to be safe, which may result in a small performance hit.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| individual processes write values to process i
|-
| multiple[]
| single[on[i]]
| individual processes read values from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
==== Communication Example ====
var a:Int;
var b:Int :: allocated[single[on[2]]];
var p;
par p from 0 to 3 {
if (p==2) b:=p;
a:=b;
sync;
};
This code will result in each process reading the value of ''b'' from process 2 and then writing it into ''a''. As already noted, in the absence of allocation information the default of allocating to all processes is used; in this example the variable ''a'' can be assumed to additionally have the type ''allocated[multiple]''. Note that communication groups are the same as multiple in this context and share the same semantics. All variables marked multiple are private to their containing process.
8e16a709a2e9cca763c10e3199f020e2ec9d2bda
Float
0
47
268
267
2019-04-15T15:44:30Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
Float
== Semantics ==
A 32 bit floating point number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Float;
};
In this example variable ''i'' is explicitly declared to be of type ''Float''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
a2465b2c1f8ed114a674a125799f7da2b547712a
Double
0
48
274
273
2019-04-15T15:44:31Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
Double
== Semantics ==
A double precision 64 bit floating point number. This is the type given to constant floating point numbers that appear in program code.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Double;
};
In this example variable ''i'' is explicitly declared to be of type ''Double''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
b7fe5a9eb26c4db5128d1512334b45663c564529
Bool
0
49
279
278
2019-04-15T15:44:31Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
Bool
== Semantics ==
A true or false value.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Bool;
var x:=true;
};
In this example variable ''i'' is explicitly declared to be of type ''Bool''. Variable ''x'' is declared with the value ''true'', which via type inference results in its type also becoming ''Bool''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
61cd134a6211d42a250c7a78545120a531d7f9c5
Char
0
50
285
284
2019-04-15T15:44:31Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
Char
== Semantics ==
An 8 bit ASCII character.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Char;
var r:='a';
};
In this example variable ''i'' is explicitly declared to be of type ''Char''. Variable ''r'' is declared and found, via type inference, to also be type ''Char''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
fed9f001ad7720d80d580b97ffdb7093490cce8b
String
0
51
291
290
2019-04-15T15:44:32Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
String
== Semantics ==
A string of characters. All strings are immutable; concatenating strings will in fact create a new string.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:String;
var p:="Hello World!";
};
In this example variable ''i'' is explicitly declared to be of type ''String''. Variable ''p'' is found, via type inference, also to be of type ''String''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
7ab2bc8ea1834a195f690040b72929215f16644e
File
0
52
297
296
2019-04-15T15:44:32Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
File
== Semantics ==
A file handle which the programmer can use to reference open files on the file system.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:File;
};
In this example variable ''i'' is explicitly declared to be of type ''File''.
''Since: Version 0.41b''
== Communication ==
It is not currently possible to communicate file handles due to operating system constraints.
[[Category:Element Types]]
[[Category:Type Library]]
92b15263b845093ec2b1258c275a9fe25ea23606
Long
0
53
302
301
2019-04-15T15:44:32Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
Long
== Semantics ==
A 64 bit whole number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Long;
};
In this example variable ''i'' is explicitly declared to be of type ''Long''.
''Since: Version 0.41b''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
63d47e595b62f0bad6e8c5cdff2e6e0c1f63073c
Category:Attribute Types
14
54
305
304
2019-04-15T15:44:32Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Allocation Types
14
55
308
307
2019-04-15T15:44:32Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Collection Types
14
56
311
310
2019-04-15T15:44:32Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Primitive Communication Types
14
57
315
314
2019-04-15T15:44:32Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
Primitive communication types ensure that all safe forms of communication supported by MPI can also be represented in Mesham. However, unlike the shared variable approach adopted elsewhere, when using primitive communication the programmer is responsible for ensuring that communications complete and match up.
[[Category:Compound Types]]
5d0ec50f91cba0c362a1408df596fd93896dfa14
Category:Communication Mode Types
14
58
319
318
2019-04-15T15:44:32Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
By default, communication in Mesham is blocking (i.e. execution will not continue until a send or receive has completed). Standard sends will complete either when the message has been sent to the target processor or when it has been copied into a buffer, on the source machine, ready for sending. In most situations the standard send is the most efficient; however, in some specialist situations more performance can be gained by overriding this.
These communication mode types illustrate a powerful aspect of type based parallelism: the programmer can use the default communication method initially and then, to fine tune their code, simply add extra types to experiment with the performance of the different communication options.
[[Category:Compound Types]]
3d0877f21ad8c741348088de810ac5a594bb092a
Category:Partition Types
14
59
323
322
2019-04-15T15:44:32Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
Often in data parallel HPC applications the programmer wishes to split up data in some way, shape or form. This is often a difficult task, as the programmer must consider issues such as synchronisation and uneven distributions. Mesham provides types to allow for the partitioning and distribution of data, the programmer needs just to specify the correct type and then behind the scenes the compiler will deal with all the complexity via the type system. It has been found that this approach works well, not just because it simplifies the program, but also because some of the (reusable) codes associated with parallelization types are designed beforehand by expert system programmers. These types tend be better optimized by experts than the codes written directly by the end programmers.
When the programmer partitions data, the compiler splits it up into blocks (an internal type of the compiler). The location of these blocks depends on the distribution type used - it is possible for all the blocks to be located on one process, on a few or on all, and if there are more blocks than processes they can always "wrap around". The whole idea is that the programmer can refer to separate blocks without needing to worry about exactly where they are located; this makes it very easy to change the distribution method to something more efficient later down the line if required.
The programmer can think of two kinds of partitioning - partitioning for distribution and partitioning for viewing. The partition type located inside the allocated type is the partition for distribution (and also the default view of the data). However, if the programmer wishes to change the way they are viewing the blocks of data, then a different partition type can be coerced. This will modify the view of the data, but NOT the underlying way that the data is allocated and distributed amongst the processes. Of course, it is important to avoid an ambiguous combination of partition types. In order to access a certain block of a partition, simply use array access [ ], i.e. ''a[3]'' will access the 3rd block of variable ''a''.
In the code ''var a:array[Int,10,20] :: allocated[A[m] :: single[D[]]]'', the variable ''a'' is declared to be a 2d array size 10 by 20, using partition type A and splitting the data into ''m'' blocks. These blocks are distributed amongst the processes via distribution method ''D''.
In the code fragment ''a:(a::B[])'', the partition type ''B'' is coerced with the type of variable ''a'', and the view of the data changes from that of ''A'' to ''B''.
[[Category:Compound Types]]
66eaf2d4c0434d9b6720a800483533a10b2f3796
Category:Distribution Types
14
60
326
325
2019-04-15T15:44:32Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Category:Composition Types
14
61
329
328
2019-04-15T15:44:32Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Compound Types]]
81db6c33502c8ba83977eccdbe388b25019bfd95
Allocated
0
62
335
334
2019-04-15T15:44:33Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
allocated[type]
Where ''type'' is optional
== Semantics ==
This type sets the memory allocation of a variable, which may not be modified once set.
== Example ==
function void main() {
var i: Int :: allocated[];
};
In this example the variable ''i'' is an integer. Although the ''allocated'' type is provided, no additional information is given and as such Mesham allocates it to each processor.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
b24412163f3b57beb406f819cf40c539bc63f5fa
Multiple
0
63
341
340
2019-04-15T15:44:33Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
multiple[type]
Where ''type'' is optional
== Semantics ==
Included within the ''allocated'' type, this will (with no arguments) set the variable to have memory allocated to all processes within the current scope. This sets the variable to be private to each allocating process (i.e. no other process can view its copy).
== Example ==
function void main() {
var i: Int :: allocated[multiple[]];
};
In this example the variable ''i'' is an integer, allocated to all processes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
bd9759d747a54e8e9bfb964a0ddf3d4a0e430ba0
Commgroup
0
64
348
347
2019-04-15T15:44:33Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
commgroup[process list]
== Semantics ==
Specified within the multiple type, this will limit memory allocation (and variable communication) to the processes in the list given as this type's arguments. The type will also ensure that the processes of the communication group exist. All variables marked in this way are private to their local processes.
== Example ==
function void main() {
var i:Int :: allocated[multiple[commgroup[1,3]]];
};
In this example there are a number of processes, but only 1 and 3 have variable ''i'' allocated to them. This type will also have ensured that processes zero and two exist, so that there can be a process three.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
42b4ba047e27696deecdee70c89e2b28bd85583e
Single
0
65
353
352
2019-04-15T15:44:33Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
single[type]
single[on[process]]
where ''type'' is optional
== Semantics ==
This will allocate a variable to a specific process. It is most commonly combined with the ''on'' type, which specifies the process to allocate to, although this is not required if the process can be inferred. Additionally, the programmer will place a distribution type within ''single'' when dealing with distributed arrays.
== Example ==
function void main() {
var i:Int :: allocated[single[on[1]]];
};
In this example variable ''i'' is declared as an integer and allocated on process 1.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
6e74dbec9bd0f6e55312f76ea5613a2cb312e5b4
Const
0
66
358
357
2019-04-15T15:44:33Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
const[ ]
== Semantics ==
Enforces the read only property of a variable.
== Example ==
function void main() {
var a:Int;
a:=34;
a:(a :: const[]);
a:=33;
};
The code in the above example will produce an error. Whilst the first assignment (''a:=34'') is legal, on the subsequent line the programmer modifies the type of ''a'' to be that of ''a'' combined with the type ''const''. The second assignment attempts to modify a now read only variable and will fail.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
02b303fc0decec05fb087ac6a22055e71f02c14c
Tempmem
0
67
362
361
2019-04-15T15:44:33Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Syntax ==
tempmem[ ]
== Semantics ==
Used to inform the compiler that the programmer is happy for a call (usually communication) to use temporary memory. Some calls cannot function without this and will give an error; others will work more efficiently with temporary memory but can operate without it at a performance cost. This type is provided because memory is often at a premium, with applications running at their limit. It is therefore useful for the programmer to indicate whether or not using extra, temporary, memory is allowed.
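== Example ==
As a hypothetical sketch, the type might be combined into a variable's type chain in the same manner as the other attribute types (the variable and its size here are illustrative):
function void main() {
var a:array[Int,1000] :: allocated[multiple[]] :: tempmem[];
};
This would indicate that operations involving ''a'' are permitted to use extra, temporary, memory.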
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
47a73f661f93b39324cc395041a14797ffe84a76
Share
0
68
367
366
2019-04-15T15:44:33Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
share[name]
== Semantics ==
This type allows the programmer to have two variables sharing the same memory (the variable that the share type is applied to uses the memory of the variable named in the type's argument). This is very useful in HPC applications, as processes are often running at the limit of their resources. The type will share memory with that of the variable ''name'' in the above syntax. In order to keep this type safe, the sharing variable must be smaller than or equal in size to the memory chunk it shares; this is error checked.
== Example ==
function void main() {
var a:Int::allocated[multiple[]];
var c:Int::allocated[multiple[] :: share[a]];
var e:array[Int,10]::allocated[single[on[1]]];
var u:array[Char,12]::allocated[single[on[1]] :: share[e]];
};
In the example above, the variables ''a'' and ''c'' will share the same memory. The variables ''e'' and ''u'' will also share the same memory. There is some potential concern that this might result in an error, as the size of the ''u'' array is 12 while the size of the ''e'' array is only 10. When the two arrays have different element types the sizes are checked dynamically; as an Int is 32 bits and a Char only 8, this sharing of data will work in this case.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
865ac55f449ec32694ba7760a025ce93f230e16d
Extern
0
69
372
371
2019-04-15T15:44:33Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
extern[]
== Semantics ==
Provided as additional allocation type information, this tells the compiler NOT to allocate memory for the variable, as this has already been done externally.
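== Example ==
A hypothetical sketch, following the placement of other allocation type information within a type chain (the variable name, size and exact placement of ''extern'' are illustrative):
function void main() {
var buffer:array[Char,256] :: allocated[multiple[] :: extern[]];
};
Here no memory would be allocated for ''buffer''; it is assumed to have been allocated already outside the language, for example by embedded C code.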
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
6756c1cd74419a93ab7119eaed8b0055ef7258ff
Directref
0
70
378
377
2019-04-15T15:44:34Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
directref[ ]
== Semantics ==
This tells the compiler that the programmer might use this variable outside of the language (e.g. via embedded C code) and not to perform certain optimisations which might prevent this.
== Example ==
function void main() {
var pid:Int :: allocated[multiple[]] :: directref[];
};
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Attribute Types]]
62f811435d57f522da752efa1e30827f4b9b8749
Array
0
71
390
389
2019-04-15T15:44:34Z
Polas
1
11 revisions imported
wikitext
text/x-wiki
== Syntax ==
array[type,d<sub>1</sub>,d<sub>2</sub>,...,d<sub>n</sub>]
== Semantics ==
An array, where ''type'' is the element or record type, followed by the dimensions. The programmer can provide any number of dimensions to create an n dimension array. Default is row major allocation (although this can be overridden via types.) In order to access an element of an array, the programmer uses the traditional ''name[index]'' syntax.<br><br>
''Note:'' If the dimensions are omitted then it is assumed to be a one dimensional array of infinite size without any explicit memory allocation (i.e. data passed into a function). Be aware that without any size information it is not possible to bounds check indexes.
=== Default typing ===
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[heap]]
* [[onesided]]
== Communication ==
When an array variable is assigned to another, depending on where each variable is allocated to, there may be communication to achieve this assignment. The table details the communication rules for this assignment ''assigned variable := assigning variable''. As with the element type, default communication of arrays is safe.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| multiple[]
| multiple[]
| local assignment
|-
| single[on[i]]
| multiple[]
| individual processes write values to process i
|-
| multiple[]
| single[on[i]]
| individual processes read values from process i
|-
| single[on[i]]
| single[on[i]]
| local assignment where i==i
|-
| single[on[i]]
| single[on[j]]
| communication from j to i where i!=j
|}
== Example ==
#include <io>
#include <string>
function void main() {
var a:array[String,2];
a[0]:="Hello";
a[1]:="World";
print(a[0]+" "+a[1]+"\n");
};
This example will declare variable ''a'' to be an array of 2 Strings. Then the first location in the array will be set to ''Hello'' and the second location set to ''World''. Lastly the code will display on stdio both these array string locations followed by newline.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
254a5d47d7945fa88840a4d053a413f81238e9ac
Row
0
72
396
395
2019-04-15T15:44:34Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
row[ ]
== Semantics ==
In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In row major allocation the first dimension is the most major and the last most minor.
== Example ==
function void main() {
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
};
Here the array uses column major allocation, but the programmer has overridden this (just for the assignment) on line 3. If an array of one allocation is copied to an array of a different allocation then transposition will be performed automatically in order to preserve indexes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
1a1dd5c667218e633b48ebc4dd960d90c8a2363a
Col
0
73
402
401
2019-04-15T15:44:34Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
col[ ]
== Semantics ==
In combination with the array, the programmer can specify whether allocation is row or column major. This allocation information is provided in the allocation type. In column major allocation the first dimension is the least major and the last dimension the most major.
== Example ==
function void main() {
var a:array[Int,10,20] :: allocated[col[] :: multiple[]];
a[1][2]:=23;
(a :: row)[1][2]:=23;
};
Here the array uses column major allocation, but the programmer has overridden this (just for the assignment) on line 3. If an array of one allocation is copied to an array of a different allocation then transposition will be performed automatically in order to preserve indexes.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Collection Types]]
5ceda5448223aaecc60dc57d8341983da56a52cb
Channel
0
74
408
407
2019-04-15T15:44:35Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
channel[a,b]
Where ''a'' and ''b'' are both distinct processes which the channel will connect.
== Semantics ==
The ''channel'' type will specify that a variable is a channel from process ''a'' (sender) to process ''b'' (receiver). Normally this will result in synchronous communication, although if the ''async'' type is used then asynchronous communication is selected instead. Note that a channel is unidirectional, where process ''a'' sends and ''b'' receives, NOT the other way around.<br><br>
''Note:'' By default (no further type information) all channel communication is blocking using standard send.<br>
''Note:'' If no allocation information is specified with the channel type then the underlying variable will not be assigned any memory - it is instead an abstract connection in this case.
== Example ==
function void main() {
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2 {
(x::channel[0,2]):=193;
var hello:=(x::channel[0,2]);
};
};
In this case, ''x'' is a channel between processes 0 and 2. In the par loop process 0 sends the value 193 to process 2. Then the variable ''hello'' is declared and process 2 will receive this value.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
017370ae8fb49bea2ebec6633a0c741397e8921f
Pipe
0
75
412
411
2019-04-15T15:44:35Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Syntax ==
pipe[a,b]
== Semantics ==
Identical to the [[Channel]] type, except pipe is bidirectional rather than unidirectional.
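== Example ==
The following sketch adapts the example from the [[Channel]] page, substituting ''pipe'' for ''channel'':
function void main() {
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 2 {
(x::pipe[0,2]):=193;
var hello:=(x::pipe[0,2]);
};
};
Here, as in the [[Channel]] example, the value 193 is communicated from process 0 to process 2; because the pipe is bidirectional, the same declaration could equally carry a message from process 2 back to process 0.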
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
f691875ec9792acb345b209a1b3a8266ef975af4
Onesided
0
76
419
418
2019-04-15T15:44:35Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
onesided[a,b]
onesided[]
== Semantics ==
Identical to the [[Channel]] type, but will perform onesided communication rather than point to point (p2p). This form of communication is less efficient than p2p, but there are no issues such as deadlock to consider. This type is connected to the [[sync]] keyword, which allows the programmer to barrier synchronise to ensure up to date values. The current memory model is Concurrent Read Concurrent Write (CRCW).<br><br>
''Note:'' This is the default communication behaviour in the absence of further type information.
== Example ==
function void main() {
var i:Int::onesided::allocated[single[on[2]]];
proc 0 {i:=34;};
sync i;
};
In the above code example variable ''i'' is declared to be an Integer using onesided communication on process two only. In line two an assignment occurs on process zero which will write the value, from process zero into the memory held by process two. At line three barrier synchronisation will occur on variable ''i'', which in this case will involve processes zero and two ensuring that the value has been written fully and is available.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
7c0ff4ce4c8a57a8d60c76c1158b2439b77f5bcc
Reduce
0
77
426
425
2019-04-15T15:44:35Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
reduce[root,operation]
== Semantics ==
All processes in the group will combine their values together at the root process and then the operation will be performed on them.
== Example ==
function void main() {
var t:Int::allocated[multiple[]];
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::reduce[1,"max"]);
x:=p;
t:=x;
};
};
In this example, ''x'' is to be reduced, with the root as process 1 and the operation will be to find the maximum number. In the first assignment ''x:=p'' all processes will combine their values of ''p'' and the maximum will be placed into process 1's ''x''. In the second assignment ''t:=x'' processes will combine their values of ''x'' and the maximum will be placed into process 1's ''t''.
''Since: Version 0.41b''
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
760fffc606dd80b0b556dd9cef544a44eb693696
Broadcast
0
78
432
431
2019-04-15T15:44:35Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
broadcast[root]
== Semantics ==
This type will broadcast a variable amongst the processes, with the root (source) being PID=root. The variable concerned must either be allocated to all processes or to a group of processes (in the latter case communication will be limited to that group).
== Example ==
function void main() {
var a:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
(a::broadcast[2]):=23;
};
};
In this example process 2 (the root) will broadcast the value 23 amongst the processes, each process receiving this value and placing it into their copy of ''a''.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
03ad9faa79774e87bcc4735feb12340962787ef9
Gather
0
79
437
436
2019-04-15T15:44:35Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
gather[elements,root]
== Semantics ==
Gather a number of elements (equal to ''elements'') from each process and send these to the root process.
== Example ==
function void main() {
var x:array[Int,12] :: allocated[single[on[2]]];
var r:array[Int,3] :: allocated[multiple[]];
var p;
par p from 0 to 3 {
(x::gather[3,2]):=r;
};
};
In this example, the variable ''x'' is allocated on the root process (2) only, whereas ''r'' is allocated on all processes. In the assignment the three elements of ''r'' on each process are gathered and sent to the root process (2), then placed into variable ''x'' in the order defined by the source's PID.
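The ordering of the gathered data can be sketched in plain Python (illustrative only, not Mesham; the ''gather'' helper is hypothetical):

```python
def gather(per_process_blocks, root):
    """Each process contributes its block; the root's buffer receives
    the blocks concatenated in source-PID order."""
    gathered = [e for block in per_process_blocks for e in block]
    return {root: gathered}         # only the root holds the result

# Each of four processes contributes a 3-element block:
blocks = [[p * 10 + i for i in range(3)] for p in range(4)]
print(gather(blocks, root=2))
```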
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
e4a03011d4a685bd754193f6ff3f264bdc0e5997
Scatter
0
80
442
441
2019-04-15T15:44:35Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
scatter[elements,root]
== Semantics ==
Sends a number of elements (equal to ''elements'') from the root process to each process in the group.
== Example ==
function void main() {
var x:array[Int,3]::allocated[multiple[]];
var r:array[Int,12]::allocated[multiple[]];
var p;
par p from 0 to 3 {
x:(x::scatter[3,1]);
x:=r;
};
};
In this example, three elements of array ''r'' on process 1 are scattered to each process and placed in its copy of ''x''.
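The chunking can be sketched in plain Python (illustrative only, not Mesham; the ''scatter'' helper is hypothetical):

```python
def scatter(root_array, elements, nprocs):
    """Split the root's array into `elements`-sized chunks; process p
    receives chunk p."""
    return [root_array[p * elements:(p + 1) * elements]
            for p in range(nprocs)]

# Twelve elements on the root, scattered 3 at a time to 4 processes:
print(scatter(list(range(12)), elements=3, nprocs=4))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
```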
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
44d165b64b97e8f9675dc560f2c6ff660a4623e7
Alltoall
0
81
447
446
2019-04-15T15:44:36Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
alltoall[elementsoneach]
== Semantics ==
Will cause each process to send some elements (the number being equal to ''elementsoneach'') to every other process in the group.
== Example ==
function void main() {
var x:array[Int,12]::allocated[multiple[]];
var r:array[Int,3]::allocated[multiple[]];
var p;
par p from 0 to 3 {
(x::alltoall[3]):=r;
};
};
In this example each process sends every other process three elements (the elements in its ''r''.) Therefore each process ends up with twelve elements in ''x'', the location of each based on the source process's PID.
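Following the description above (where each process sends its whole block to every process), the result layout can be sketched in plain Python (illustrative only, not Mesham; the ''alltoall'' helper is hypothetical):

```python
def alltoall(per_process_data):
    """Every process sends its block to every process; each destination
    stores the incoming blocks ordered by the source's PID."""
    nprocs = len(per_process_data)
    return [[e for src in range(nprocs) for e in per_process_data[src]]
            for _dst in range(nprocs)]

# Each process p holds r = [p, p, p]; afterwards every x holds 12 elements:
r = [[p, p, p] for p in range(4)]
print(alltoall(r)[0])  # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```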
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
715787adb1d21ed672dc76d5a4e824861dc7cc3c
Allreduce
0
82
454
453
2019-04-15T15:44:36Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
allreduce[operation]
== Semantics ==
Similar to the [[reduce]] type, but the result of the reduction is made available on every process.
== Example ==
function void main() {
var x:Int::allocated[multiple[]];
var p;
par p from 0 to 3 {
(x::allreduce["min"]):=p;
};
};
In this case all processes will perform the reduction on ''p'' and all processes will have the minimum value of ''p'' placed into their copy of ''x''.
''Since: Version 0.41b''
== Supported operations ==
{{ Template:ReductionOperations }}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
f561cbfab20c8d3e1ea1f794556cb53f7ab1cbeb
Async
0
83
460
459
2019-04-15T15:44:36Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
async[ ]
== Semantics ==
This type specifies that the communication concerned should be carried out asynchronously. Asynchronous communication is often very useful and, if used correctly, can increase the efficiency of some applications (although care must be taken.) There are a number of ways in which the results of asynchronous communication can be accepted; when the asynchronous operation is honoured the data is placed into the variable, but exactly when the operation will be honoured is non-deterministic, so care must be taken if using dirty values.
The [[sync]] keyword allows the programmer to synchronise either ALL asynchronous communication or that of a specific variable. The programmer must ensure that all asynchronous communications have been honoured before the process exits, otherwise bad things will happen!
== Examples ==
function void main() {
var a:Int::allocated[multiple[]] :: channel[0,1] :: async[];
var p;
par p from 0 to 2 {
a:=89;
var q:=20;
q:=a;
sync q;
};
};
In this example, ''a'' is declared to be an integer, allocated to all processes, and to act as an asynchronous channel between processes 0 and 1. In the par loop, the assignment ''a:=89'' is applicable on process 0 only, resulting in an asynchronous send. Each process executes the assignment and declaration ''var q:=20'' but only process 1 will execute the last assignment ''q:=a'', resulting in an asynchronous receive. Each process then synchronises all the communications relating to variable ''q''.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: async[];
var c:Int::allocated[single[on[3]]] :: async[];
a:=b;
c:=a;
b:=c;
sync;
};
This example demonstrates the use of the ''async'' type in terms of default shared variable style communication. In the assignment ''a:=b'', processor 2 will issue an asynchronous send and processor 1 will issue a synchronous (standard) receive. In the second assignment, ''c:=a'', processor 3 will issue an asynchronous receive and processor 1 a synchronous send. In the last assignment, ''b:=c'', both processors (3 and 2) will issue asynchronous communication calls (send and receive respectively.) The last line of the program will force each process to wait and complete all asynchronous communications.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
07d00f232b51e34fd49c4ae7b036005a83780309
Blocking
0
84
466
465
2019-04-15T15:44:36Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
blocking[ ]
== Semantics ==
Forces P2P communication to be blocking; this is the default setting.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: blocking[];
a:=b;
};
The P2P communication (send on process 2 and receive on process 1) resulting from assignment ''a:=b'' will force program flow to wait until it has completed. The ''blocking'' type has been omitted from the type of variable ''a'', but is applied by default.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
1a916b2a9e2c79154094eb7f50e9f9b5cc5d2676
Nonblocking
0
85
472
471
2019-04-15T15:44:36Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
nonblocking[ ]
== Semantics ==
This type will force P2P communication to be nonblocking. In this mode communication (send or receive) can be thought of as having two distinct states - start and finish. The nonblocking type will start communication and allow program execution to continue between these two states, whilst blocking (standard) mode requires that the finish state be reached before continuing. The [[sync]] keyword can be used to force the program to wait until the finish state has been reached.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]] :: nonblocking[];
var b:Int::allocated[single[on[2]]];
a:=b;
sync a;
};
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking receive whilst process 2 will issue a blocking send. All nonblocking communication with respect to variable ''a'' is completed by the keyword ''sync a''.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
653752188a33b60292d65aa33576345130c98de8
Standard
0
86
478
477
2019-04-15T15:44:36Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
standard[ ]
== Semantics ==
This type will force P2P sends to follow the standard form of reaching the finish state either when the message has been delivered or it has been copied into a buffer on the sender. This is the default applied if further type information is not present.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]] :: nonblocking[] :: standard[];
var b:Int::allocated[single[on[2]]] :: standard[];
a:=b;
};
In the P2P communication resulting from assignment ''a:=b'', process 1 will issue a non-blocking standard receive whilst process 2 will issue a blocking standard send.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
594fcde910d32d7bd6e0003296ff56446dd17c9d
Buffered
0
87
485
484
2019-04-15T15:44:37Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
buffered[buffersize]
== Semantics ==
This type will ensure that P2P Send will reach the finish state (i.e. complete) when the message is copied into a buffer of size ''buffersize'' bytes. At some later point the message will be sent to the target process. If ''buffersize'' is not provided then a default is used. This type associates with the [[sync]] keyword which will wait until the message has been copied out of the buffer.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: buffered[500];
var c:Int::allocated[single[on[2]]] :: buffered[500] :: nonblocking[];
a:=b;
a:=c;
};
In the P2P communication resulting from assignment ''a:=b'', process 2 will issue a (blocking) buffered send (buffer size 500 bytes), which will complete once the message has been copied into this buffer. In the assignment ''a:=c'', process 2 will issue another buffered send, this time nonblocking, where program flow continues between the start and finish states of communication. The finish state is reached once the value of variable ''c'' has been copied into a buffer held on process 2.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
50c6962feabfcd511f17efec01dec17a438123d3
Ready
0
88
492
491
2019-04-15T15:44:37Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
ready[ ]
== Semantics ==
The ''ready'' type will force P2P Send to start only if a matching receive has been posted by the target processor. When used in conjunction with the [[nonblocking]] type, communication start will wait until a matching receive is posted. This type acts as a form of handshaking and can improve performance in some uses.
== Example ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: ready[];
var c:Int::allocated[single[on[2]]] :: ready[] :: nonblocking[];
a:=b;
a:=c;
};
The send of assignment ''a:=b'' will only begin once the receive from process 1 has been issued. With the statement ''a:=c'' the send, even though it is [[nonblocking]], will only start once a matching receive has been issued too.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
21bdd8ab0eb0a389b37a343c45f73493cbec3f78
Synchronous
0
89
498
497
2019-04-15T15:44:37Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
synchronous[]
== Semantics ==
By using this type, the send of P2P communication will only reach the finish state once the message has been received by the target processor.
== Examples ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[single[on[2]]] :: synchronous[] :: blocking[];
var c:Int::allocated[single[on[2]]] :: synchronous[] :: nonblocking[];
a:=b;
a:=c;
};
The send of assignment ''a:=b'' (and program execution on process 2) will only complete once process 1 has received the value of ''b''. The send involved with the second assignment is synchronous [[nonblocking]] where program execution can continue between the start and finish state, the finish state only reached once process 1 has received the message (value of ''c''.) Incidentally, as already mentioned, the [[blocking]] type of variable ''b'' would have been chosen by default if omitted (as in previous examples.)
var a:Int :: allocated[single[on[0]]];
var b:Int :: allocated[single[on[1]]];
a:=b;
a:=(b :: synchronous[]);
The code example above demonstrates the programmer's ability to change the communication send mode just for a specific assignment. In the first assignment, process 1 issues a [[blocking]] [[standard]] send, however in the second assignment the communication mode type ''synchronous'' is coerced with the type of ''b'' to provide a [[blocking]] synchronous send for this assignment only.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Communication Mode Types]]
2828c60e03ad41895edf8f33973bce097fd1e6f2
Horizontal
0
90
510
509
2019-04-15T15:44:38Z
Polas
1
11 revisions imported
wikitext
text/x-wiki
== Syntax ==
horizontal[blocks]
Where ''blocks'' is the number of blocks to partition into.
== Semantics ==
This type will split up data horizontally into a number of blocks. If the split is uneven then the extra data is distributed amongst the blocks so as to keep them a similar size. The figure below illustrates horizontally partitioning an array into three blocks.
<center>[[Image:horiz.jpg|Horizontal Partition of an array into three blocks via type oriented programming]]</center>
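The block-sizing rule can be sketched in plain Python (illustrative only, not Mesham; the exact placement of the remainder rows is implementation specific, this sketch spreads them one per block from the front):

```python
def horizontal_blocks(nrows, nblocks):
    """Split `nrows` rows into `nblocks` contiguous blocks, spreading
    any remainder one row at a time so sizes differ by at most one."""
    base, extra = divmod(nrows, nblocks)
    return [base + (1 if b < extra else 0) for b in range(nblocks)]

print(horizontal_blocks(10, 3))  # [4, 3, 3]
```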
== Communication ==
{{OneDimPartitionCommunication}}
== Dot operators ==
Horizontal blocks also support a variety of dot operators to provide metadata
{{OneDimPartitionDotOperators}}
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
dd8e8fe91aba1b876e11458d10956bf81264b378
Vertical
0
91
516
515
2019-04-15T15:44:38Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
vertical[blocks]
== Semantics ==
Same as the [[horizontal]] type but will partition the array vertically. The figure below illustrates partitioning an array into 4 blocks vertically.
<center>[[Image:vert.jpg|Vertical Partition of an array into four blocks via type oriented programming]]</center>
== Communication ==
{{OneDimPartitionCommunication}}
== Dot operators ==
Vertical blocks also support a variety of dot operators to provide metadata
{{OneDimPartitionDotOperators}}
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Partition Types]]
ff782b0995c5f24e4f31410874d51a8ae4ddb72d
File:Horiz.jpg
6
92
518
517
2019-04-15T15:44:38Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Horizontal partitioning of an array via the horizontal type
574c772bfc90f590db956c081c201e3ab506c94b
File:Vert.jpg
6
93
520
519
2019-04-15T15:44:38Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Vertical partitioning of an array via the vertical type
bf828b129f970f21341fb2357d36f32a993c68be
File:Evendist.jpg
6
94
522
521
2019-04-15T15:44:38Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Even distribution of 10 blocks over 4 processors
1831c950976897aab248fe6058609023f0edb3bd
Evendist
0
95
528
527
2019-04-15T15:44:38Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
evendist[]
== Semantics ==
Will distribute data blocks evenly amongst the processes. If there are more blocks than processes then the blocks will wrap around; if there are fewer blocks than processes then not all processes will receive a block. The figure below illustrates even distribution of 10 blocks of data over 4 processes.
<center>[[Image:evendist.jpg|Even distribution of 10 blocks of data over 4 processors using type oriented programming]]</center>
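The wrap-around behaviour can be sketched in plain Python (illustrative only, not Mesham; round-robin assignment is an assumption here, the page only specifies that blocks wrap around):

```python
def evendist(nblocks, nprocs):
    """Assign block b to process b mod nprocs, wrapping round when
    there are more blocks than processes."""
    owners = {p: [] for p in range(nprocs)}
    for b in range(nblocks):
        owners[b % nprocs].append(b)
    return owners

# 10 blocks over 4 processes, as in the figure above:
print(evendist(10, 4))  # {0: [0, 4, 8], 1: [1, 5, 9], 2: [2, 6], 3: [3, 7]}
```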
== Example ==
function void main() {
var a:array[Int,16,16] :: allocated[row[] :: horizontal[4] :: single[evendist[]]];
var b:array[Int,16,16] :: allocated[row[] :: vertical[4] :: single[evendist[]]];
var e:array[Int,16,16] :: allocated[row[] :: single[on[1]]];
var p;
par p from 0 to 3 {
var q:=b[p][2][3];
var r:=a[p][2][3];
var s:=b :: horizontal[][p][2][3];
};
a:=e;
};
In this example (which involves 4 processors) there are three [[array|arrays]] declared, ''a'', ''b'' and ''e''. Array ''a'' is [[horizontal|horizontally]] partitioned into 4 blocks, evenly distributed amongst the processors, whilst ''b'' is [[vertical|vertically]] partitioned into 4 blocks and also evenly distributed amongst the processors. Array ''e'' is located on processor 1 only. All arrays are allocated [[row]] major. In the [[par]] loop, variables ''q'', ''r'' and ''s'' are declared and assigned to be values at specific points in a processor's block. Because ''b'' is partitioned [[vertical|vertically]] and ''a'' [[horizontal|horizontally]], variable ''q'' is the value at ''b's'' block memory location 11, whilst ''r'' is the value at ''a's'' block memory location 35. On line 9, variable ''s'' is the value at ''b's'' block memory location 50 because, just for this expression, the programmer has used the [[horizontal]] type to take a horizontal view of the distributed array. It should be noted that in line 9, it is just the view of data that is changed, the underlying data allocation is not modified.
In line 11 the assignment ''a:=e'' results in a scatter as per the definition of its declared type.
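The block memory locations quoted for ''q'' and ''r'' follow from row-major addressing within a block, which can be checked in plain Python (illustrative arithmetic only, not Mesham):

```python
def row_major_offset(row, col, block_width):
    """Row-major offset of element [row][col] within a block that is
    `block_width` columns wide."""
    return row * block_width + col

# vertical[4] on a 16x16 array: each block is 16 rows by 4 columns,
# so b[p][2][3] sits at block offset 11.
print(row_major_offset(2, 3, 4))   # 11
# horizontal[4]: each block is 4 rows by 16 columns,
# so a[p][2][3] sits at block offset 35.
print(row_major_offset(2, 3, 16))  # 35
```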
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Distribution Types]]
a3d17fd7606dcd26e3fbe842d3e71a2dfa31e0f8
Record
0
96
536
535
2019-04-15T15:44:38Z
Polas
1
7 revisions imported
wikitext
text/x-wiki
== Syntax ==
record[name<sub>1</sub>,type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,.....,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The ''record'' type allows the programmer to combine ''d'' attributes into one, new type. There can be any number of names and types inside the record type. A record type is very similar to a typedef structure in C. To access a member of a record use the dot operator ''.''
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[stack]]
* [[onesided]]
== Example ==
function void main() {
typevar complex ::= record["r",Float,"i",Float];
var a:array[complex, 10];
var number:complex;
var pixel : record["r",Int,"g",Int,"b",Int];
a[1].r:=8.6;
number.i:=3.22;
pixel.b:=128;
};
In the above example, ''complex'' is declared as a [[Type_Variables|type variable]] to be a complex number. This is then used in the type chain for ''a'' (an [[array]]) and for ''number''. Using records in this manner can be useful, although a record can also be included directly in the type chain of a variable, as in the declaration of ''pixel''. Do not confuse ''complex'' (a type variable existing during compilation only) with ''pixel'' (a normal data variable which exists at runtime.) In the last three lines assignments occur to the declared variables.
''Since: Version 0.41b''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
efc39c9403ee2e1e18968e6cc3d099670c7d384d
Referencerecord
0
97
543
542
2019-04-15T15:44:39Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
referencerecord[name<sub>1</sub>, type<sub>1</sub>,name<sub>2</sub>,type<sub>2</sub>,...,name<sub>d</sub>,type<sub>d</sub>]
== Semantics ==
The [[record]] type may NOT refer to itself (or other records), whereas reference records support this, allowing the programmer to create data structures such as linked lists and trees. There are some added complexities with reference records, such as communicating them (all links and linked nodes will be communicated with the record) and freeing the data (garbage collection.) This results in a slight performance hit and is the reason why the record concept has been split into two types.
=== Default typing ===
* [[allocated]]
* [[multiple]]
* [[heap]]
''Currently communication is not available for reference records; this will be fixed at some point in the future.''
== Example ==
#include <io>
#include <string>
typevar node;
node::=referencerecord["prev",node,"data",Int,"next",node];
function void main() {
var head:node;
head:=null;
var i;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=head;
if (head!=null) head.prev:=newnode;
head:=newnode;
};
while (head != null) {
print(itostring(head.data)+"\n");
head:=head.next;
};
};
In this code example a doubly linked list is created, and then its contents read node by node.
''Since: Version 0.5''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Composition Types]]
93fccbcb8408dc735075a3cd715e43a3828471e3
Category:Types
14
98
551
550
2019-04-15T15:44:39Z
Polas
1
7 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
A type can follow a number of different syntactic forms. The abstract syntax of a type is detailed in the table below. Where ''elementtype'' is defined in the type library, ''varname'' represents the current type of a variable and ''type :: type'' represents type combination to coerce into a new supertype.
type = elementtype
| compoundtype
| type :: type
| varname
All element types start with a capitalised first letter and there must be at least one element type per type chain. Compound types start with a lower-case letter and fall into a number of different subcategories of type
compoundtype = attribute
| allocation
| collection
| primitive communication
| communication mode
| partition
| distribution
| composition
Types may be referred to with or without arguments; the square braces ''[]'' after a type are therefore optional, with or without data inside.
== Declarations ==
=== Syntax ===
var name:type;
Where ''type'', as explained, is an ''elementtype'', a ''compoundtype'', variable name or ''type :: type''. The operator '':'' sets the type and ''::'' is type combination (coercion).
=== Semantics ===
This will declare a variable to be a specific type. Type combination is subject to a number of semantic rules. If no type information is given, then the type will be found via inference where possible.
=== Examples ===
function void main() {
var i:Int :: allocated[multiple[]];
};
Here the variable ''i'' is declared to be integer, allocated to all processes. There are three types included in this declaration, the element type [[Int]] and the compound types [[allocated]] and [[multiple]]. The type [[multiple]] is provided as an argument to the allocation type [[allocated]], which is then combined with the [[Int]] type.
function void main() {
var m:String;
};
In this example, variable ''m'' is declared to be of type [[String]]. For programmer convenience, by default, the language will automatically assume to combine this with ''allocated[multiple]'' if such allocation type is missing.
== Statements ==
=== Syntax ===
name:type;
=== Semantics ===
Will modify the type of an already declared variable via the '':'' operator. Note, allocation information (via the ''allocation'' type) may not be changed. Type modification such as this binds to the current block; the type reverts to its previous value once that block has been left.
=== Examples ===
function void main() {
var i:Int :: allocated[multiple[]];
i:=23;
i:i :: const[];
};
Here the variable ''i'' is declared to be [[Int|integer]], [[allocated]] to all processes, and its value is set to 23. Later in the code the type is modified to also make it [[const|constant]] (so from this point on the programmer may not change the variable's value.) The third line ''i:i :: const[];'' sets the type of ''i'' to be that of ''i'' combined with the [[const]] type.
'''Important Rule''' - Changing the type will not have any runtime code generation in itself, although the modified semantics will affect how the variable behaves from that point on.
== Expressions ==
=== Syntax ===
name::type
=== Semantics ===
When used as an expression, a variable's current type can be coerced with additional types just for that expression.
=== Example ===
function void main() {
var i:Int :: allocated[multiple[]];
(i :: channel[1,2]):=82;
i:=12;
};
This code will declare ''i'' to be an [[Int|integer]], [[allocated]] on all processes. On line 2 ''i :: channel[1,2]'' will combine the [[channel]] type (primitive communication) just for that assignment and then on line 3 the assignment happens as a normal integer. This is because on line 2 we have not set the type of ''i'', just modified it for that assignment.
[[Category:Core Mesham]]
a7b716165dac3a58ff84bee985e129d3307d24d6
Currenttype
0
99
556
555
2019-04-15T15:44:39Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
currenttype varname;
== Semantics ==
Will return the current type of the variable.<br><br>
''Note:'' If a variable is used within a type context then this is assumed to be shorthand for the current type of that variable<br>
''Note:'' This is a static construct and hence only available during compilation. It must be statically deducible and not used in a manner that is dynamic.
== Example ==
function void main() {
var i: Int;
var q:currenttype i;
};
Will declare ''q'' to be an integer, the same type as ''i''.
''Since: Version 0.5''
[[Category:Sequential]]
[[Category:Types]]
217b7e0a9ebf06a97b6b4383d196959d015c0cf6
Declaredtype
0
100
562
561
2019-04-15T15:44:39Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
declaredtype name
Where ''name'' is a variable name
== Semantics ==
Will return the declared type of the variable.<br><br>
''Note:'' This is a static construct only and its lifetime is limited to during compilation.
== Example ==
function void main() {
var i:Int;
i:i::const[];
i:declaredtype i;
};
This code example will firstly type ''i'' to be an [[Int]]. On line 2, the type of ''i'' is combined with the type [[const]] (enforcing read only access to the variable's data.) On line 3, the programmer is reverting the variable back to its declared type (i.e. so one can write to the data.)
''Since: Version 0.5''
[[Category:Sequential]]
[[Category:Types]]
d075683e34b2162a57ddbfff3aee30f3472f406c
Type Variables
0
101
567
566
2019-04-15T15:44:39Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
typevar name::=type;
name::=type;
Note how ''::='' is used rather than '':=''
''typevar'' is the type equivalent of ''var''
== Semantics ==
Type variables allow the programmer to assign types and type combinations to variables for use as normal program variables. These exist only statically (in compilation) and are not present in the runtime semantics.
== Example ==
function void main() {
typevar m::=Int :: allocated[multiple[]];
var f:m;
typevar q::=declaredtype f;
q::=m;
};
In the above code example, the type variable ''m'' has the type value ''Int :: allocated[multiple[]]'' assigned to it. On line 2, a new (program) variable is created using this new type variable. In line 3, the type variable ''q'' is declared and has the value of the declared type of program variable ''f''. Lastly in line 4, type variable ''q'' changes its value to become that of type variable ''m''. Although type variables can be thought of as the programmer creating new types, they can also be used like program variables in cases such as equality tests and assignment.
''Since: Version 0.5''
[[Category:Types]]
c18308550a08b9c0f21eccd7c4e097cba79cb6da
Category:Type Library
14
102
570
569
2019-04-15T15:44:39Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Maths Functions
14
103
573
572
2019-04-15T15:44:39Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
The functionality in this library is available by preprocessor including ''<maths>''
[[Category:Function Library]]
398a15e1bea4c1e5eb5a6422ee37a9a9033f6772
Category:IO Functions
14
104
576
575
2019-04-15T15:44:39Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by preprocessor including ''<io>''
114f028dc298c3ce8c74bfc0096aaae25564a336
Category:Parallel Functions
14
105
579
578
2019-04-15T15:44:39Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by preprocessor including ''<parallel>''
e3a19810ea868f3a545857d358b62aca2dd45d89
Category:String Functions
14
106
582
581
2019-04-15T15:44:40Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by preprocessor including ''<string>''
8b69af5a50dbf837cefe04f7fcf466f3a50ddb76
Category:System Functions
14
107
585
584
2019-04-15T15:44:40Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
[[Category:Function Library]]
The functionality in this library is available by preprocessor including ''<system>''
71eac3e1c287cdd004d63d44ae0305abf1ba8bde
Cos
0
108
594
593
2019-04-15T15:44:40Z
Polas
1
8 revisions imported
wikitext
text/x-wiki
== Overview ==
This cos(d) function will find the cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find cosine of
* '''Returns:''' A [[Double]] representing the cosine
== Example ==
#include <maths>
function void main() {
var a:=cos(10.4);
var y:=cos(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
2004d07102bd926cb9cc5206d040163454bf58e2
Floor
0
109
599
598
2019-04-15T15:44:40Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This floor(d) function will find the largest integer less than or equal to ''d''.
* '''Pass:''' A [[Double]] to find floor of
* '''Returns:''' An [[Int]] representing the floor
== Example ==
#include <maths>
function void main() {
var a:=floor(10.5);
var y:=floor(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
a1f40f5f8327abe46dfefea992816c1d2a3181cd
Getprime
0
110
604
603
2019-04-15T15:44:40Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This getprime(n) function will find the ''n''th prime number.
* '''Pass:''' An [[Int]]
* '''Returns:''' An [[Int]] representing the prime
== Example ==
#include <maths>
function void main() {
var a:=getprime(10);
var y:=getprime(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
72563debc31be6a39bdb903f7c4a797d537529b6
Log
0
111
611
610
2019-04-15T15:44:40Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Overview ==
This log(d) function will find the natural logarithmic value of ''d''
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the logarithmic value
== Example ==
#include <maths>
function void main() {
var a:=log(10.54);
var y:=log(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
260900e77f6d0766001d0fccafbe7e21e636b685
Mod
0
112
616
615
2019-04-15T15:44:40Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This mod(n,x) function will divide ''n'' by ''x'' and return the remainder.
* '''Pass:''' Two integers
* '''Returns:''' An integer representing the remainder
== Example ==
#include <maths>
function void main() {
var a:=mod(7,2);
var y:=mod(a,a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
596158b28e9add95119049e4ee4a43f7810c9ad8
PI
0
113
622
621
2019-04-15T15:44:40Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
This pi() function will return PI.
''Note: The number of significant figures of PI is implementation specific.''
* '''Pass:''' None
* '''Returns:''' A [[Double]] representing PI
== Example ==
#include <maths>
function void main() {
var a:=pi();
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
5380d2d50eccb8ee2c895d308484ad6efade625a
Pow
0
114
628
627
2019-04-15T15:44:41Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
This pow(n,x) function will return ''n'' to the power of ''x''.
* '''Pass:''' Two [[Int|Ints]]
* '''Returns:''' A [[Double]] representing ''n'' raised to the power of ''x''
== Example ==
#include <maths>
function void main() {
var a:=pow(2,8);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
13f5fa88a084da5eab6479c3725c8117c3857d6a
Randomnumber
0
115
633
632
2019-04-15T15:44:41Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This randomnumber(n,x) function will return a random number between ''n'' and ''x''.
''Note: A whole number is returned UNLESS the bounds 0,1 are passed, in which case a floating point number between 0 and 1 is returned.''
* '''Pass:''' Two [[Int|Ints]] defining the bounds of the random number
* '''Returns:''' A [[Double]] representing the random number
== Example ==
#include <maths>
function void main() {
var a:=randomnumber(10,20);
var b:=randomnumber(0,1);
};
In this case, ''a'' is a whole number between 10 and 20, whereas ''b'' is a decimal number.
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
1bb2cbf3fac50d477f062d74f1ad04f2cc0c9141
Sqr
0
116
639
638
2019-04-15T15:44:41Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
This sqr(d) function will return the result of squaring ''d''.
* '''Pass:''' A [[Double]] to square
* '''Returns:''' A [[Double]] representing the squared result
== Example ==
#include <maths>
function void main() {
var a:=sqr(3.45);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
f1106c368ec367c719727c32704259f8abc135b0
Sqrt
0
117
644
643
2019-04-15T15:44:41Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This sqrt(d) function will return the square root of ''d''.
* '''Pass:''' A [[Double]] to find the square root of
* '''Returns:''' A [[Double]] which is the square root
== Example ==
#include <maths>
function void main() {
var a:=sqrt(8.3);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
1d3b50879f14cddf97f36f9892bb5b9df2d2874f
Input
0
118
650
649
2019-04-15T15:44:41Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
This input(i) function will prompt the user for input via stdin, placing the result into ''i''.
* '''Pass:''' A variable for the input to be written into, of type [[String]]
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:String;
input(f);
print("You wrote: "+f+"\n");
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
efadb447b7496688629c4a02ea7cc538c64e6296
Print
0
119
655
654
2019-04-15T15:44:41Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This print(n) function will write a variable of value ''n'' to stdout.
* '''Pass:''' A [[String]] typed variable or value
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:="Hello";
print(f+" world\n");
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
c5d3ebfe96d7748fac20a332ed1cc95dba18bf95
Readchar
0
120
661
660
2019-04-15T15:44:41Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
This readchar(f) function will read a character from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readchar the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read character from
* '''Returns:''' A character from the file type [[Char]]
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","r");
var u:=readchar(f);
close(f);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
29873925e47663b06fc6fe02d0542541ee129877
Readline
0
121
666
665
2019-04-15T15:44:42Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This readline(f) function will read a line (delimited by the new line character) from a file with handle ''f''. The file handle maintains its position in the file, so after a call to readline the position pointer will be incremented.
* '''Pass:''' The [[File]] handle to read the line from
* '''Returns:''' A line of the file type [[String]]
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","r");
var u:=readline(f);
close(f);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
4ea7b88528ae2b22863940fba861bee7a2f1a1ff
Pid
0
122
671
670
2019-04-15T15:44:42Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This pid() function will return the current process's ID number.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the current process ID
== Example ==
#include <parallel>
function void main() {
var a:=pid();
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Parallel Functions]]
4df3b22f261b1137c0967d25404e15b0a280f0c7
Processes
0
123
676
675
2019-04-15T15:44:42Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This processes() function will return the number of processes.
* '''Pass:''' Nothing
* '''Returns:''' An [[Int]] representing the number of processes
== Example ==
#include <parallel>
function void main() {
var a:=processes();
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Parallel Functions]]
2ac7efcb08254df1e32445bbd0313562793d405e
Charat
0
124
682
681
2019-04-15T15:44:42Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
This charat(s,n) function will return the character at position ''n'' of the string ''s''.
* '''Pass:''' A [[String]] and [[Int]]
* '''Returns:''' A [[Char]]
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=charat(a,2);
var d:=charat("test",0);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
bd7a355f15778415e2fde11942f7a99ee90a8a5c
Lowercase
0
125
688
687
2019-04-15T15:44:42Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
This lowercase(s) function will return the lower-case version of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:="HeLlO";
var c:=lowercase(a);
var d:=lowercase("TeST");
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
c46b6bd33d89eca359411a0b7cb1d3d89fb71fa5
Strlen
0
126
693
692
2019-04-15T15:44:42Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This strlen(s) function will return the length of string ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=strlen(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
a18daa62766c394f31f8f169be32f01ebe7ad013
Substring
0
127
698
697
2019-04-15T15:44:42Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This substring(s,n,x) function will return the substring of ''s'' between positions ''n'' and ''x''.
* '''Pass:''' A [[String]] and two [[Int|Ints]]
* '''Returns:''' A [[String]] which is a subset of the string passed into it
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=substring(a,2,4);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
0863a8f9cac73fe5b61378d1e114209d19bb3861
Toint
0
128
703
702
2019-04-15T15:44:42Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This toint(s) function will convert the string ''s'' into an integer.
* '''Pass:''' A [[String]]
* '''Returns:''' An [[Int]]
== Example ==
#include <string>
function void main() {
var a:="234";
var c:=toint(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
10b405c943ba3a1c59943f5ff7177c6824026e5f
Uppercase
0
129
708
707
2019-04-15T15:44:42Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This uppercase(s) function will return the upper-case version of string or character ''s''.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:="HeLlO";
var c:=uppercase(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:String Functions]]
f4673a67eac2ecfaa17a6b02dc376dcad03dd3d2
Displaytime
0
130
712
711
2019-04-15T15:44:43Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
This displaytime() function will display the timing results recorded by the function [[recordtime]] along with the process ID. This is very useful for debugging or performance testing.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:System Functions]]
3f06a11df08b2266964a7ead9ded50acbd9a19d2
Recordtime
0
131
716
715
2019-04-15T15:44:43Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
This recordtime() function will record the current (wall clock) execution time upon reaching that point. This is useful for debugging or performance testing; the recorded times can be displayed via the [[displaytime]] function.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:System Functions]]
e9033859546f9291d8abe65b3a8d7e3700e0c825
Exit
0
132
720
719
2019-04-15T15:44:43Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
This exit() function will cease program execution and return to the operating system. From an implementation point of view, this will return ''EXIT_SUCCESS'' to the OS.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:System Functions]]
0fbe682d48df22a1732cf87f79f50ad0c7d81945
Oscli
0
133
725
724
2019-04-15T15:44:43Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This oscli(a) function will pass a command line interface (e.g. Unix or MS-DOS) command to the operating system for execution.
* '''Pass:''' A [[String]] representing the command
* '''Returns:''' Nothing
* '''Throws:''' The error string ''oscli'' if the operating system returns an error to this call
== Example ==
#include <io>
#include <system>
function void main() {
var a:String;
input(a);
try {
oscli(a);
} catch ("oscli") {
print("Error in executing command\n");
};
};
The above program is a simple interface, allowing the user to input a command which is then passed to the OS for execution. The ''oscli'' call is wrapped in a try-catch block which will detect when the user has requested an erroneous command; this explicit error handling is entirely optional.
''Since: Version 0.5''
[[Category:Function Library]]
[[Category:System Functions]]
157bc855222f3afa62b1ecad06f38a0aff6c40b0
Category:Function Library
14
134
728
727
2019-04-15T15:44:43Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Mandelbrot
0
135
743
742
2019-04-15T15:44:43Z
Polas
1
14 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:mandle.gif|170px|right|Mandelbrot in Mesham]]
The Mandelbrot example will compute the Mandelbrot set over any number of processes. This is a set of points in the complex plane, the boundary of which forms a fractal. The mathematics behind the Mandelbrot computation, which are quite simple, do not really matter for our purposes. The important points are that the calculation is embarrassingly parallel (i.e. simple and natural to parallelise) and that it produces an image which the user can identify with.
The algorithm itself is actually quite simple, with a relatively large proportion of it dealing with the colourisation of the resulting fractal. The example on this page is purposely basic so that the potential programmer can understand it.
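The escape-time iteration at the heart of the computation can be sketched in a few lines of Python (an illustrative sketch only, not Mesham; note the Mesham code in the Source Code section uses a larger escape threshold of 100 on the squared magnitude, whereas this sketch uses the conventional escape radius of 2):

```python
# Escape-time test for a single point c of the complex plane: iterate
# z := z*z + c and report the iteration at which |z| exceeds the escape
# radius, or -1 if the point appears to belong to the Mandelbrot set.
def escape_time(c, itermax=1000):
    z = 0j
    for iteration in range(1, itermax + 1):
        z = z * z + c
        if abs(z) > 2.0:
            return iteration
    return -1
```

Colouring each pixel by its escape time (as determinePixelColour does below) is what produces the familiar fractal image.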
<br style="clear: both" />
== Performance ==
[[Image:mandlezoom.jpg|400px|left|Mandelbrot Performance Evaluation, Mesham against C-MPI]]
The Mandelbrot example was evaluated against one written in C-MPI on a supercomputing cluster. The graph details the performance of the two codes; on small numbers of processors their performance was almost identical and so is not shown. Due to the embarrassingly parallel nature of this problem, the performance advantages of using Mesham do not start to stand out until a large number of processors is reached.
<br style="clear: both" />
== Source Code ==
#include <io>
#include <string>
typevar pixel::=record["r",Int,"g",Int,"b",Int];
var pnum:=16; // number of processes to run this on
var hxres:=512;
var hyres:=512;
var magnify:=1;
var itermax:=1000;
function Int iteratePixel(var hy:Float, var hx:Float) {
var cx:Double;
cx:=((((hx / hxres) - 0.5) / magnify) * 3) - 0.7;
var cy:Double;
cy:=(((hy / hyres) - 0.5) / magnify) * 3;
var x:Double;
var y:Double;
var iteration;
for iteration from 1 to itermax {
var xx:=((x * x) - (y * y)) + cx;
y:= ((2 * x) * y) + cy;
x:=xx;
if (((x * x) + (y * y)) > 100) {
return iteration;
};
};
return -1;
};
function void main() {
var mydata:array[pixel,hxres,hyres] :: allocated[single[on[0]]];
var p;
par p from 0 to pnum - 1 {
var tempd:array[record["r",Int,"g",Int,"b",Int], hyres];
var myStart:=p * (hyres / pnum);
var hy:Int;
for hy from myStart to (myStart + (hyres / pnum)) - 1 {
var hx;
for hx from 0 to hxres - 1 {
var iteration := iteratePixel(hy, hx);
tempd[hx]:=determinePixelColour(iteration);
};
mydata[hy]:=tempd;
sync mydata;
};
};
proc 0 {
createImageFile("picture.ppm", mydata);
};
};
function pixel determinePixelColour(var iteration:Int) {
var singlePixel:pixel;
if (iteration > -1) {
singlePixel.b:=(iteration * 10) + 100;
singlePixel.r:=(iteration * 3) + 50;
singlePixel.g:=(iteration * 3)+ 50;
if (iteration > 25) {
singlePixel.b:=0;
singlePixel.r:=(iteration * 10);
singlePixel.g:=(iteration * 5);
};
if (singlePixel.b > 255) singlePixel.b:=255;
if (singlePixel.r > 255) singlePixel.r:=255;
if (singlePixel.g > 255) singlePixel.g:=255;
} else {
singlePixel.r:=0;
singlePixel.g:=0;
singlePixel.b:=0;
};
return singlePixel;
};
function void createImageFile(var name:String, var mydata:array[pixel,hxres,hyres]) {
var file:=open(name,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(hyres));
writestring(file," ");
writestring(file,itostring(hxres));
writestring(file,"\n255\n");
// now write data into the file
var j;
for j from 0 to hyres - 1 {
var i;
for i from 0 to hxres - 1 {
writebinary(file,mydata[j][i].r);
writebinary(file,mydata[j][i].g);
writebinary(file,mydata[j][i].b);
};
};
close(file);
};
''This code is compatible with Mesham version 1.0 and later''
== Notes ==
To change the number of processes, edit ''pnum''. To change the size of the image, edit ''hxres'' and ''hyres''. The Mandelbrot set will be calculated up to ''itermax'' iterations for each point; increasing this value gives a crisper image (but takes much more time!) Lastly, the variable ''magnify'' specifies the magnification of the image - a value of 1 generates the whole image, and increasing this value directs the computation into working on a specific area in more detail.
'''Note:''' This example will produce an image in the Portable PixMap format (PPM). Viewers for these on Unix-based systems are easy to come by (e.g. Eye of GNOME) but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last part on process 0 so that a BMP (bitmap) file is created instead.
== Download ==
You can download the Mandelbrot example [http://www.mesham.com/downloads/mandle.mesh here] or a legacy Mesham 0.5 version [http://www.mesham.com/downloads/mandle-0.5.mesh here]
[[Category:Example Codes]]
108c9b66d317c6c409e982d68a71e2867000d236
File:Mandle.gif
6
136
745
744
2019-04-15T15:44:43Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Mandelbrot example written in Mesham
96c49786466d38afa546f88100b6dd44fa0e0380
Prefix sums
0
137
754
753
2019-04-15T15:44:44Z
Polas
1
8 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
Prefix sums is a very simple, basic parallel algorithm commonly used as the building block of many applications. Also known as a scan, each process sums its value with the values of all preceding processes. For instance, p=0 returns its own value, p=1 returns the sum of the p=1 and p=0 values, and p=2 returns the sum of the p=2, p=1 and p=0 values. The MPI reduce command often implements this communication via a logarithmic combining structure.
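The scan pattern described above can be expressed sequentially in a few lines of Python (an illustrative sketch only, not Mesham; the parallel version below distributes the values across processes and combines them with the reduce communication type):

```python
# Inclusive prefix sum (scan): element i of the result is the sum of
# values[0..i], mirroring what process i computes in the parallel version.
def prefix_sums(values):
    result = []
    running = 0
    for v in values:
        running += v
        result.append(running)
    return result
```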
== Source Code ==
#include <maths>
#include <io>
#include <string>
var processes:=10;
function void main(var argc:Int,var argv:array[String]) {
var a:Int :: allocated[multiple[]];
var p;
par p from 0 to processes - 1 {
var mine:Int; // Force to be an integer as randomnumber function defaults to double
mine:= randomnumber(0,toint(argv[1]));
var i;
for i from 0 to processes - 1 {
var myvalue:=mine;
if (i < p) myvalue:=0;
(a :: reduce[i, "sum"]):=myvalue;
};
print(itostring(p)+" "+itostring(mine)+" = "+itostring(a)+"\n");
};
};
''This code requires at least Mesham version 1.0''
== Notes ==
The user can provide, via command line options, the range of the random number to find. The (relative) complexity of the prefix sums is taken away by using the reduce primitive communication type.
== Download ==
Download the entire prefix sums source code [http://www.mesham.com/downloads/prefix.mesh here]. You can also download a legacy version for Mesham 0.5 [http://www.mesham.com/downloads/prefix-0.5.mesh here].
[[Category:Example Codes]]
92a317726e47048ea81784c9c08ae0d23505b15f
File:Dartboard.jpg
6
138
756
755
2019-04-15T15:44:44Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Dartboard
b560bd391a0504dee677d480d1ea12753fef21e9
Dartboard PI
0
139
765
764
2019-04-15T15:44:44Z
Polas
1
8 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
[[Image:dartboard.jpg|thumb|260px|right|Dartboard method to find PI]]
The dartboard method is a simple algorithm to find the value of PI. It must be noted that there are much better methods for finding PI; however, the dartboard method is embarrassingly parallel and as such quite simple to parallelise. The basic premise is that you throw n darts randomly at a round dartboard on a square backing. As each dart is thrown randomly, the ratio of darts hitting the board to those landing on the square is equal to the ratio between the two areas - which is PI / 4. Of course, the more darts you simulate throwing at the board, the better the approximation of PI. In our example each process performs this simulated throwing a number of times, and then one process combines and averages every process's approximation of PI to obtain the result. Very roughly, this means that with d darts thrown over r rounds on n processes, the time taken in parallel is the time needed to simulate throwing d * r darts, whereas a sequential algorithm would need to simulate throwing d * r * n darts. (We have excluded communication costs from the parallel case to simplify the concept.) Changing the number of processes, the number of rounds, or the number of darts thrown in each round will directly change the accuracy of the result.
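The sequential kernel of this method is easy to state in Python (an illustrative sketch of the idea described above, not part of the Mesham example):

```python
import random

# Monte Carlo ("dartboard") estimate of PI: darts land uniformly in the
# unit square, and the fraction falling inside the quarter circle of
# radius 1 approximates PI / 4.
def estimate_pi(darts, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(darts):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return 4.0 * hits / darts
```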
== Source Code ==
#include <maths>
#include <io>
#include <string>
var m:=64; // number of processes
function void main() {
var calculatedPi:array[Double,m]:: allocated[single[on[0]]];
var mypi:Double;
var p;
par p from 0 to m - 1 {
var darts:=10000; // number of darts to simulate throwing each round
var rounds:=100; // number of rounds of darts to throw
var i;
for i from 0 to rounds - 1 {
mypi:=mypi + (4.0 * (throwdarts(darts) / darts));
};
mypi:=mypi / rounds;
calculatedPi[p]:=mypi;
};
sync;
proc 0 {
var avepi:Double;
var i;
for i from 0 to m - 1 {
avepi:=avepi + calculatedPi[i];
};
avepi:=avepi / m;
print(dtostring(avepi, "%.2f")+"\n");
};
};
function Double throwdarts(var darts:Int)
{
var score:Double;
var n:=0;
for n from 0 to darts - 1 {
var xcoord:=randomnumber(0,1);
var ycoord:=randomnumber(0,1);
if ((pow(xcoord,2) + pow(ycoord,2)) < 1.0) {
score++; // hit the dartboard!
};
};
return score;
};
''This code requires at least Mesham version 1.0''
== Download ==
The dartboard method to compute PI source code is located [http://www.mesham.com/downloads/pi.mesh here]; a legacy version for Mesham 0.5 can be downloaded [http://www.mesham.com/downloads/pi-0.5.mesh here].
[[Category:Example Codes]]
db223d6445a217c0d2bcba772e49bfc65e7481f2
Prime factorization
0
140
772
771
2019-04-15T15:44:44Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example will perform prime factorization of a number in parallel, returning the prime factors which make up that number. The example uses the allreduce primitive communication type. There are a number of ways such a result can be obtained - this example is a simple parallel algorithm for the job.
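For comparison, the sequential equivalent can be sketched in Python via trial division (illustrative only, not Mesham; the parallel version below distributes candidate prime divisors across processes instead):

```python
# Sequential trial-division prime factorization: repeatedly divide out
# the smallest divisor until the number is reduced to 1.
def prime_factors(n):
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    return factors
```

For instance, the number 976 used in the example below factorizes as 2 * 2 * 2 * 2 * 61.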
== Source Code ==
#include <io>
#include <string>
#include <maths>
var n:=976; // this is the number to factorize
var m:=10; // number of processes to use
var s:Int :: allocated[multiple[]];
function void main() {
var p;
par p from 0 to m - 1 {
var k:=p;
var divisor;
var quotient;
while (n > 1) {
divisor:= getprime(k);
quotient:= n / divisor;
var remainder:= n % divisor;
if (remainder == 0) {
n:=quotient;
} else {
k:=k + m;
};
s :: allreduce["min"]:=n;
if ((s==n) && (quotient==n)) {
print(itostring(divisor)+"\n");
};
n:=s;
};
};
};
''This code requires at least Mesham version 1.0''
== Notes ==
Note how we have typed the quotient to be an integer - this means that the division n / divisor will throw away the remainder. Also, for the assignment s:=n, we have typed s to be an allreduce communication primitive (resulting in the MPI all reduce command). However, later on we use s as a normal variable in the assignment n:=s, because the typing applied for the previous assignment is temporary.
As an exercise, the example could be extended so that the user provides the number either by command line arguments or via program input.
== Download ==
You can download the prime factorization source code [http://www.mesham.com/downloads/fact.mesh here] and a legacy version for Mesham 0.5 is also available [http://www.mesham.com/downloads/fact-0.5.mesh here]
[[Category:Example Codes]]
b73939d85a0373ef688ae63897e7a1035613cd1d
File:Imagep.jpg
6
141
774
773
2019-04-15T15:44:44Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Example of high and low pass filters operating on an image
44ca822d7d041388db2e0768c033edc01be7d571
Image processing
0
142
791
790
2019-04-15T15:44:45Z
Polas
1
16 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
This example is one of the more complex examples we have written in the language. It allows the user to perform parallel image processing on a black and white image, applying a low or high pass filter to the image. To do this the image needs to be transformed into the frequency domain and then transformed back into the time domain. At the core of the example is the FFT kernel; this is a basic Cooley-Tukey FFT algorithm, and more efficient ones exist. Having said that, the type information provided by the programmer allows the compiler to perform a large amount of optimisation during the translation process. By experimenting you can change the filters, for instance invoking the high pass filter rather than the low pass filter which the code currently uses.
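The Cooley-Tukey idea itself can be summarised with a short recursive sketch in Python (illustrative only, not Mesham; the Mesham source below uses an iterative, bit-reversed formulation distributed across processes):

```python
import cmath

# Recursive radix-2 Cooley-Tukey FFT (input length must be a power of
# two): split into even and odd samples, transform each half, then
# combine the halves with twiddle factors (the "butterfly" step).
def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```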
<center> [[Image:imagep.jpg|Image processing using filters in Mesham]] </center>
== Performance ==
Performance of the Fast Fourier Transform (FFT) has been evaluated on a supercomputer cluster. Two different experiments were performed, one with an image size of 128MB and the other with an image size of 2GB. Evaluations were performed against the Fastest Fourier Transform in the West (FFTW) and, for 128MB, a book example. As can be seen, with an uneven distribution of data (10 and 20 processors) FFTW experiences severe slowdowns whereas the Mesham version does not (the compiler optimises the code in this case to avoid any slowdown).
[[Image:128.jpg|500px|left|Fast Fourier Transformation with 128MB of data]]
[[Image:2gb.jpg|500px|right|Fast Fourier Transformation with 2GB of data]]
<br style="clear: both" />
== Source Code ==
#include <maths>
#include <io>
#include <string>
var n:=256; // image size
var m:=4; // number of processors
var filterThreshold:=10; // filtering threshold for high and low pass filters
function void main() {
var a:array[complex,n,n] :: allocated[single[on[0]]];
var s:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist]];
var s2:array[complex,n,n] :: allocated[horizontal[m] :: col[] :: single[evendist]];
var s3:array[complex,n,n] :: allocated[horizontal[m] :: single[evendist] :: share[s2]];
proc 0 {
loadfile("data/clown.ppm",a);
moveorigin(a);
};
s:=a;
var sinusiods:=computesin();
var p;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
filter(a);
invert(a);
};
s:=a;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s[p][i-s[p].low],sinusiods);
};
};
s2:=s;
par p from 0 to m - 1 {
var i;
for i from s[p].low to s[p].high {
FFT(s3[p][i-s[p].low],sinusiods);
};
};
a:=s3;
proc 0 {
moveorigin(a);
descale(a);
writefile("newclown.ppm", a);
};
};
function array[complex] computesin() {
var elements:= n/2;
var sinusoid:array[complex, elements];
var j;
for j from 0 to (n / 2) - 1 {
var topass:Float;
topass:=((2 * pi() * j) / n);
sinusoid[j].i:=-sin(topass);
sinusoid[j].r:=cos(topass);
};
return sinusoid;
};
function Int getLogn() {
var logn:=0;
var nx:=n;
nx := nx >> 1;
while (nx >0) {
logn++;
nx := nx >> 1;
};
return logn;
};
function void moveorigin(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * pow(-1,(i + j));
data[i][j].i:=data[i][j].i * pow(-1,(i + j));
};
};
};
function void descale(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r / (n * n) ;
data[i][j].i:=-(data[i][j].i / (n * n));
};
};
};
function void invert(var data : array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].i:=-data[i][j].i;
};
};
};
function void FFT(var data : array[complex,n], var sinusoid:array[complex]) {
var i2:=getLogn();
bitreverse(data); // data decomposition
var f0:Double;
var f1:Double;
var increvec;
for increvec from 2 to n {
i2:=i2 - 1;
var i0;
for i0 from 0 to ((increvec / 2) - 1) {
// below computes the sinusoid for this spectra
var i1;
for i1 from 0 to n - 1 {
// do butterfly for each point in the spectra
f0:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].r)- (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].i);
f1:=(data[i0 + i1 + (increvec / 2)].r * sinusoid[i0 << i2].i)+ (data[i0 + i1 + (increvec / 2)].i * sinusoid[i0 << i2].r);
data[i0 + i1 + (increvec / 2)].r:= data[i0 + i1].r- f0;
data[i0 + i1 + (increvec / 2)].i:=data[i0 + i1].i - f1;
data[i0 + i1].r := data[i0 + i1].r + f0;
data[i0 + i1].i := data[i0 + i1].i + f1;
i1:=(i1 + increvec) - 1;
};
};
increvec:=(increvec * 2) - 1;
};
};
function void loadfile(var name:String,var data:array[complex,n,n]) {
var file:=open(name,"r");
readline(file);
readline(file);
readline(file);
readline(file);
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
var red:=readchar(file);
readchar(file);readchar(file);
data[i][j].r:=red;
data[i][j].i:=red;
};
};
close(file);
};
function void writefile(var thename:String, var data:array[complex,n,n]) {
var file:=open(thename,"w");
writestring(file,"P6\n# CREATOR: LOGS Program\n");
writestring(file,itostring(n));
writestring(file," ");
writestring(file,itostring(n));
writestring(file,"\n255\n");
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
writebinary(file,data[i][j].r);
};
};
close(file);
};
function Int lowpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) < filterThreshold) return 1;
return 0;
};
function Int highpass(var i:Int, var j:Int) {
var val:=sqr(i) + sqr(j);
if (sqrt(val) > (255-filterThreshold)) return 1;
return 0;
};
function void filter(var data: array[complex,n,n]) {
var i;
for i from 0 to n - 1 {
var j;
for j from 0 to n - 1 {
data[i][j].r:=data[i][j].r * lowpass(i,j) * highpass(i,j);
data[i][j].i:=data[i][j].i * lowpass(i,j) * highpass(i,j);
};
};
};
function void bitreverse(var a:array[complex,n]) {
var j:=0;
var k:Int;
var i;
for i from 0 to n-2 {
if (i < j) {
var swap_temp:Double;
swap_temp:=a[j].r;
a[j].r:=a[i].r;
a[i].r:=swap_temp;
swap_temp:=a[j].i;
a[j].i:=a[i].i;
a[i].i:=swap_temp;
};
k := n >> 1;
while (k <= j) {
j := j - k;
k := k >>1;
};
j := j + k;
};
};
''This version requires at least Mesham version 1.0''
== Notes ==
The algorithm is relatively simple. One drawback is that, even though the transpositions are distributed, after the first 2D FFT all the data must be sent back to process 0 for filtering and then redistributed. It would improve the runtime if we could filter the data without having to collect it all on a central process - this would be an interesting improvement to make to the algorithm.
'''Note:''' This example will produce an image in the Portable PixMap format (PPM). Viewers for these on Unix-based systems are easy to come by (e.g. Eye of GNOME) but on Windows they are slightly more difficult to find. Windows users might want to rewrite some of the last part on process 0 so that a BMP (bitmap) file is created instead.
== Download ==
You can download the entire image processing package [http://www.mesham.com/downloads/fftimage.zip here]; there is also a legacy version for Mesham 0.5 [http://www.mesham.com/downloads/fftimage-0.5.zip here].
There is also a simplified FFT code available [http://www.mesham.com/downloads/fft.mesh here], which the image processing was based upon, and a version which can be run with any number of processes decided at runtime [http://www.mesham.com/downloads/fft-dynamic.mesh here].
[[Category:Example Codes]]
6a4c11dadc22fcc76c9e6413b31fa9a0826c12eb
Procedures
0
143
793
792
2019-04-15T15:44:45Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[Functions]]
7a7b5cb084fd2aa6ee3ba6b684ea45d8d1eea795
NAS-IS Benchmark
0
144
808
807
2019-04-15T15:44:45Z
Polas
1
14 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
NASA's Parallel Benchmarks (NPBs) are a common, convenient way of evaluating the performance of different classes of machine. By using the official NASA implementation it is possible to evaluate Mesham against existing languages. To date the evaluation has been done against NASA's C-MPI code, which is the most common, and arguably the most efficient, language choice in parallel computing.
There are numerous benchmarks in the NPB suite; to date we have implemented benchmark IS (Integer Sort), which sorts numbers in parallel using a modified version of the bucket sort algorithm. This benchmark has five classes associated with it - class S with 65,000 numbers, class W with 1,000,000 numbers, class A with 8,000,000 numbers, class B with 33,000,000 numbers and lastly class C with 340,000,000 numbers. We have performed the evaluation using classes B and C, which involve sorting the greatest quantities of numbers and hence pose the largest challenge, although all classes are supported by the benchmark code.
The benchmark has been tuned for performance. This does mean that some of the lower level primitive communication types have been used, so the code is not as easily readable as many of the other examples. Having said that, it did not take long to write and is easily modifiable if required.
== Performance Results ==
Performance tests were carried out on a supercomputer cluster, comparing the Mesham code against NASA's existing C-MPI parallel code, both of which have been tuned for performance.
[[Image:classc.jpg|400px|right|NASA's Parallel Benchmark IS class C]]
[[Image:classb.jpg|400px|left|NASA's Parallel Benchmark IS class B]]
[[Image:total.jpg|400px|left|NASA's Parallel Benchmark IS Total Million Operations per Second]]
[[Image:process.jpg|400px|right|NASA's Parallel Benchmark IS Million Operations per Second per Process]]
<br style="clear: both" />
== Source Code ==
The source code is more extensive than that of the other examples, with files for each class of experiment. It is therefore not included on this page, but you can download it below.
== Download ==
You can download the entire code package for the current version of the compiler [http://www.mesham.com/downloads/npb.zip here] and for the older 0.5 version [http://www.mesham.com/downloads/npb.tar.gz here].
[[Category:Example Codes]]
bd6d11c5a0f94b07a6c915928f8ac34d5d449811
Download rtl 0.1
0
145
816
815
2019-04-15T15:44:45Z
Polas
1
7 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Runtime library 0.1|author=[[User:polas|Nick Brown]]|desc=The runtime library required for Mesham 0.41b.|url=http://www.mesham.com|image=Runtimelibrary.png|version=0.1|released=September 2008}}
''Please Note: This version of the runtime library is deprecated but required for [[Download_0.41_beta|Mesham 0.41b]]''
== Runtime Library Version 0.1 ==
This is the Mesham Runtime Library Version 0.1 and the last version to provide explicit support for Windows operating systems. This version of the runtime library is ONLY compatible with Mesham 0.41(b); it will not work with Mesham 0.5.
== Download ==
You can download version 0.1 of the [http://www.mesham.com/downloads/libraries01.zip Runtime Library here] ''(Source cross platform compatible.)''
You can download version 0.1 of the [http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library here] ''(Binary for Windows 32 bit.)''
== Instructions for Use ==
Please refer to the [[Download_all|All version 0.41(b)]] page for detailed installation instructions. The target machine will require a C99 conforming compiler and an implementation of the MPI 2 standard (such as MPICH or OpenMPI.)
884aa7f2bd384f8262f055306485a2d4b15c630f
Parallel Computing
0
146
822
821
2019-04-15T15:44:46Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Parallel Computing ==
Parallel computing is the use of multiple computing resources to solve a problem. These problems can be very wide ranging, from small examples to highly complex cosmological simulations or weather prediction codes. Utilising parallel computing adds additional complexities and challenges to programming. The programmer must consider a wide variety of new concepts and change their mindset from sequential to parallel. Having said that, the world we live in is predominantly parallel and as such it is natural to model problems in this way.
== The Problem ==
Current parallel languages are either conceptually simple or efficient - but not both. These aims have, until this point, been contradictory. If parallel computing is to grow (as we predict given current advances in CPU and GPU technology) then this issue must be addressed. The problem is that we are using current, sequential ways of thinking to try and solve this programmability problem... instead we need to think "outside the box" and come up with a completely new solution.
== Current Solutions ==
There are numerous parallel language solutions currently in existence; we will consider just a few:
=== Message Passing Interface ===
The MPI standard is extremely popular within this domain. Although bindings exist for many languages, it is most commonly used with C. The result is low level, highly complex, difficult to maintain but efficient code. As the programmer must control all aspects of parallelism, they can often get caught up in low level details which are uninteresting but important. Additionally, the programmer is completely responsible for ensuring all communications complete correctly, or else they run the risk of deadlock, livelock, etc.
=== Bulk Synchronous Parallel ===
The BSP standard was once touted as the solution to parallel computing. Implementations of this standard are most commonly used in conjunction with C. The program is split into supersteps, and each superstep is split into three stages - computation, communication and global synchronisation via barriers. However, this synchronisation is very expensive and as such the performance of BSP is generally much poorer than that of MPI. Additionally, although the communication model adopted by BSP is simpler, the programmer must still address low level issues (such as pointers) imposed by the underlying language used.
=== High Performance Fortran ===
In HPF the programmer just specifies the general distribution of data, with the compiler taking care of all other aspects of parallelism (such as computation distribution and communication.) Although it is a simple, abstract language, because so much emphasis is placed upon the compiler to deduce parallelism, efficiency suffers. The programmer, who is often in a far better position to indicate parallel aspects, lacks control and is limited. One useful feature of HPF is that all parallel aspects are expressed via comments, so that an HPF program is also acceptable to a normal Fortran compiler.
=== Co-Array Fortran ===
This language is more explicit than HPF. The programmer, via co-arrays, distributes computation and data but must rely on the compiler to determine communication (which is often one sided.) Because of this one sided communication, messages are often short, which results in the overhead of sending many different messages. Having said this, things are improving with reference to CAF; the upcoming Fortran standard is said to include co-arrays, which will see the integration of the CAF concepts into standard Fortran.
=== Unified Parallel C ===
UPC is certainly a parallel language to keep an eye on - there is much development time and effort being spent on it at the moment. UPC uses an explicit parallel execution model with a shared address space. Memory management primitives are added into the language, along with shared memory keywords and pointers. Adding all these keywords to the language bloats it and results in a brittle, tightly coupled design. Additionally, C's array model is inherited, which is limiting in data intensive parallel computations. One must still deal with pointers and the low level challenges that these impose.
=== Titanium ===
This is an explicitly parallel version of Java; it is safe, portable and allows one to build complex data structures. Similar to UPC, it uses a global address space with numerous keywords and constructs added to the language to support parallelism. However, OO has an imposed (hidden) cost in terms of serialising and deserialising objects. There is also literature which indicates that the JRE does not consider memory locality, which is important for performance in HPC applications working on large data sets.
=== ZPL ===
ZPL is an array programming language. The authors of this language have deduced that a large majority of parallel programming is done with respect to arrays of data. To this end they have created a language with specific keywords and constructs to assist in this. For instance the expression ''A=B*C'' will set array ''A'' to the element-wise product of arrays ''B'' and ''C''. Whilst this is a useful abstraction, unfortunately parallelism itself is implicit, with limited control on behalf of the programmer. The net result is that much emphasis is placed upon the compiler to find the best solution and, with limited information, performance is adversely affected. Incidentally, in Mesham the types have been written such that a concept such as array programming can be easily included. The same expression is perfectly acceptable to Mesham, with the complexity of the operation being handled in the type library.
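The ''A=B*C'' abstraction can be sketched as follows (a Python illustration only, not ZPL or Mesham code; in ZPL the operation would additionally be distributed across processors by the compiler):

```python
# Illustrative sketch of the array-programming idea: an element-wise
# product is expressed as one assignment rather than an explicit loop.

def elementwise_product(b, c):
    """Return the element-wise product of two equal-length arrays."""
    if len(b) != len(c):
        raise ValueError("arrays must have the same length")
    return [x * y for x, y in zip(b, c)]

B = [1, 2, 3, 4]
C = [10, 20, 30, 40]
A = elementwise_product(B, C)   # plays the role of A = B * C
print(A)  # [10, 40, 90, 160]
```

The point of the abstraction is that the loop, and in a parallel setting the data distribution, is hidden behind the single expression.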
=== NESL ===
NESL is a functional parallel language. Numerous people believe that functional programming is the answer to the problem of parallel languages. However, the programmer is so abstracted from the actual machine that it is not possible to optimise their code (they are completely reliant on the compiler's efficiency), nor is it often possible to determine directly the runtime cost of an algorithm (although it is often possible to determine this theoretically.) This high level of abstraction means that it is difficult, in some cases impossible, for the NESL programmer to elicit high performance with current compiler technology. There is also the, sometimes misguided, belief amongst programmers that functional languages are difficult to learn. Whilst this is not always the case, it does put many programmers off, especially when the performance benefits of learning NESL are mediocre at best.
f5d1f2a61b7e6ec48511765e0978831650b65993
File:Pram.gif
6
147
824
823
2019-04-15T15:44:46Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Parallel Random Access Machine
b7936ec07dfd143609eabc6862a0c7fa0f6b8b17
File:Messagepassing.gif
6
148
826
825
2019-04-15T15:44:46Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Message Passing based communication
78f5d58106e6dcbc6620f6143e649e393e3eae10
Communication
0
149
831
830
2019-04-15T15:44:46Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Communication ==
Key to parallel computing is the idea of communication. There are two general communication models, shared memory and message passing. It is important to consider both these models because of the different advantages and disadvantages which each exhibits.
== Shared Memory ==
In the shared memory model, each process shares the same memory and therefore the same data. In this model communication is implicit. When programming using this model, care must be taken to avoid memory conflicts. There are a number of different sub models, such as the Parallel Random Access Machine (PRAM), whose simplicity has led to its popularity.
=== PRAM ===
The figure below illustrates how a PRAM would look, with each processor sharing the same memory and by extension the program to execute. However, a pure PRAM machine is impossible to create in reality with a large number of processors due to hardware constraints, so variations to this model are required in practice.
<center>[[Image:pram.gif|A Parallel Random Access Machine]]</center>
Incidentally, you can download a PRAM simulator [http://www.mesham.com/downloads/Gui.zip here] and a very simple programming language for it [http://www.mesham.com/downloads/apl.zip here]. The simulator, written in Java, implements a parallel version of the MIPS architecture. The simple language for it (APL) is cross compiled using GNU's cross assembler.
=== BSP ===
Bulk Synchronous Parallelism (BSP) is a parallel programming model that abstracts away from low-level program structure in favour of supersteps. A superstep consists of a set of independent local computations, followed by a global communication phase and a barrier synchronisation. One of the major advantages of BSP is that, with just four parameters, it is possible to predict the runtime cost of parallelism. This model offers a very convenient view of synchronisation. However, barrier synchronisation does have an associated cost: the performance of barriers on distributed-memory machines is predictable, although not good. On the other hand, with BSP there is no worry of deadlock or livelock, and therefore no need for detection tools and their additional associated cost. The benefit of BSP is that it imposes a clearly structured communication model upon the programmer, although extra work is required to perform more complex operations, such as the scattering of data.
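The predictability mentioned above comes from the standard BSP cost model, in which one superstep costs w + g·h + l (w: maximum local computation, h: maximum data communicated by any process, g: per-word communication gap, l: barrier latency). A minimal sketch in Python, with invented example machine parameters:

```python
# Sketch of the standard BSP cost model; the parameter values below
# are made-up illustrations, not measurements of any real machine.

def superstep_cost(w, h, g, l):
    """Predicted cost of one superstep: computation + communication + barrier."""
    return w + g * h + l

def program_cost(supersteps, g, l):
    """Total predicted cost of a sequence of (w, h) supersteps."""
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Example: three supersteps on a machine with gap g = 4 and latency l = 100.
cost = program_cost([(1000, 50), (500, 200), (2000, 10)], g=4, l=100)
print(cost)  # 4840
```

Because every superstep ends with a barrier, the totals simply add, which is what makes runtime prediction with only these four parameters possible.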
=== Logic of Global Synchrony ===
Another model following the shared memory approach is the Logic of Global Synchrony (LOGS). LOGS consists of a number of behaviours - an initial state, a final state and a sequence of intermediate states. The intermediate global states are made explicit, although the mechanics of communication and synchronisation are abstracted away.
=== Advantages ===
* Relatively Simple
* Convenient
=== Disadvantages ===
* Poor Performance
* Not Scalable
== Message Passing ==
Message passing is a paradigm used widely on certain classes of parallel machines, especially those with distributed memory. In this model, processors are very distinct from each other, with the only connection being that messages can be passed between them. Unlike in the shared memory model, in message passing communication is explicit. The figure below illustrates a typical message passing parallel system setup, with each processor equipped with its own services such as memory and IO. Additionally, each processor has a separate copy of the program to execute, which has the advantage of being able to tailor it to specific processors for efficiency reasons. A major benefit of this model is that processors can be added or removed on the fly, which is especially important in large, complex parallel systems.
<center>[[Image:messagepassing.gif|Message Passing Communication Architecture]]</center>
=== Advantages ===
* Good Performance
* Scalable
=== Disadvantages ===
* Difficult to program and maintain
155dd82514b07e687083967185f5b03adaabcc62
File:Bell.gif
6
150
833
832
2019-04-15T15:44:46Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Decreasing performance as the number of processors becomes too great
d2a2265a09e2b9959e9c9e4c9eed8f4bbaf7501e
File:Bell.jpg
6
151
835
834
2019-04-15T15:44:46Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Decreasing performance as the number of processors becomes too great
d2a2265a09e2b9959e9c9e4c9eed8f4bbaf7501e
Computation
0
152
838
837
2019-04-15T15:44:46Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Flynn's Taxonomy ==
This is an important classification of computer architectures proposed in the 1960s. It is important to match the appropriate computation model to the problem being solved. The two programming models most relevant to parallel computing are shown below; many languages allow the programmer to mix these models and Mesham is no different.
=== Single Program Multiple Data ===
In SPMD, each process executes the same program with its own data. The benefit of SPMD is that only one set of code need be written for all processors, although this code can be bloated, and there is limited support for optimising specific parts for specific architectures.
=== Multiple Program Multiple Data ===
In MPMD each process executes its own program with its own data. The benefit of MPMD is that it is possible to tailor the code to run efficiently on each processor, and it also keeps the code each processor will execute relatively small; however, writing code for each processor in a large system is not practical.
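The SPMD idea can be sketched in a few lines (plain Python used as an illustration; in a real SPMD code each rank would be a separate process obtaining its rank from the runtime, e.g. via MPI, rather than being simulated in a loop):

```python
# Minimal SPMD sketch: every "process" runs the same function, and only
# its rank determines which slice of the data it works on. The ranks
# are simulated sequentially here purely for illustration.

def spmd_partial_sum(data, rank, nprocs):
    """The single program: sum the slice owned by this rank."""
    chunk = len(data) // nprocs
    start = rank * chunk
    end = start + chunk if rank < nprocs - 1 else len(data)
    return sum(data[start:end])

data = list(range(100))
nprocs = 4
partials = [spmd_partial_sum(data, r, nprocs) for r in range(nprocs)]
print(partials, sum(partials))  # [300, 925, 1550, 2175] 4950
```

One program text, four different behaviours: exactly the property that makes SPMD codes compact but sometimes bloated with rank-dependent branches.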
== The Design of Parallelism ==
In designing how your parallel program will exploit the advantages of parallelism there are two main ways in which the parallel aspects can be structured. Which form of parallelism is employed depends on the type of problem being solved.
=== Data Parallelism ===
In data parallelism each processor executes the same instructions but works on a different data set. For instance, with matrix multiplication, one processor may work on one section of the matrices whilst other processors work on other sections, solving the problem in parallel. As a generalisation, data parallelism, which often requires an intimate knowledge of the data and explicit parallel programming, usually delivers better performance.
=== Task Parallelism ===
In task parallelism the program is divided up into tasks, each of which is sent to a unique processor to be solved at the same time. Commonly, task parallelism can be thought of as processors executing distinct threads, or processes, and at the time of writing it is the popular way in which operating systems take advantage of multicore processors. Task parallelism is often easier to achieve but less effective than data parallelism.
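A minimal task-parallel sketch, using Python's standard thread pool as an illustration (the two tasks are invented placeholders; any independent pieces of work would do):

```python
# Task parallelism: distinct, independent tasks are handed to a pool of
# workers and executed concurrently, in contrast to data parallelism
# where every worker runs the same operation on different data.
from concurrent.futures import ThreadPoolExecutor

def render_page(n):       # hypothetical task 1
    return f"page-{n}"

def compress_log(n):      # hypothetical task 2
    return f"log-{n}.gz"

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(render_page, 1)
    f2 = pool.submit(compress_log, 1)
    results = [f1.result(), f2.result()]
print(results)  # ['page-1', 'log-1.gz']
```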
== Problem Classification ==
When considering both the advantages of and how to parallelise a problem, it is important to appreciate how the problem should be decomposed across multiple processors. There are two extremes of problem classification - embarrassingly parallel problems and tightly coupled problems.
=== Embarrassingly Parallel ===
Embarrassingly parallel problems are those which require very little or no work to separate into a parallel form, and often there will exist no dependencies or communication between the processors. There are numerous examples of embarrassingly parallel problems, many of which exist in the graphics world, which is the reason why the employment of many-core GPUs has become a popular performance boosting choice.
=== Tightly Coupled Problems ===
The other extreme is that of tightly coupled problems, where it is very difficult to parallelise the problem and, if achieved, will result in many dependencies between processors. In reality most problems sit somewhere between these two extremes.
== Law of Diminishing Returns ==
There is a common misconception that "throwing" processors at a problem will automatically increase performance regardless of the number of processors or the problem type. This is simply not true, because compared with computation, communication is a very expensive operation. There is an optimum number of processors, after which the cost of communication outweighs the saving in computation made by adding an extra processor and performance drops. The figure below illustrates a performance versus processors graph for a typical problem. As the number of processors is increased, performance firstly improves; however, after reaching an optimum point performance will drop off. It is not uncommon in practice for the performance on far too many processors to be very much worse than it was on a single processor!
<center>[[Image:bell.jpg|As the number of processors goes too high performance will drop]]</center>
In theory a truly embarrassingly parallel problem (with no communication between processors) will not be subject to this rule, and the effect becomes more and more apparent as the problem type approaches that of a tightly coupled problem. The problem type, although a major consideration, is not the only factor shaping the performance curve - other issues, such as the types of processors, connection latency and the workload of the parallel cluster, will cause variations to this common bell curve.
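The bell curve can be reproduced with a toy cost model in which computation time shrinks with the processor count while communication time grows with it (all constants below are invented for illustration):

```python
# Toy diminishing-returns model: runtime = work/p + per-processor
# communication overhead. Past the optimum, adding processors hurts.

def runtime(p, work=10000.0, comm_per_proc=25.0):
    """Predicted runtime on p processors (arbitrary units)."""
    return work / p + comm_per_proc * (p - 1)

best_p = min(range(1, 65), key=runtime)
print(best_p)                          # 20
print(runtime(best_p))                 # 975.0
print(runtime(64) > runtime(best_p))   # True: well past the optimum
```

Even in this crude model the optimum is finite, and pushing to 64 processors is markedly slower than stopping at the sweet spot.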
e332a1953b0d7c21c48e8dcd73c7bfb0043f97ed
Type Oriented Programming Concept
0
153
842
841
2019-04-15T15:44:46Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Type Oriented Programming ==
Much work has been done investigating programming paradigms. Common paradigms include imperative, functional, object oriented and aspect oriented. However, we have developed the idea of type oriented programming. Taking the familiar concept of a type, we have associated in-depth runtime semantics with it, so that the behaviour of variable usage (i.e. access and assignment) can be determined by analysing the specific type. In many languages there is the requirement to combine a number of attributes with a variable; to this end we allow the programmer to combine types together to form a supertype (type chain.)
== Type Chains ==
A type chain is a collection of types, combined together by the programmer. It is this type chain that determines the behaviour of a specific variable. Precedence in the type chain is from right to left (i.e. the last added type will override the behaviour of previously added types.) This precedence allows the programmer to add additional information, either permanently or for a specific expression, as the code progresses.
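The precedence rule can be modelled in a few lines of Python (a hypothetical sketch only; the type names and attribute keys below are invented for illustration and are not Mesham's actual internal representation):

```python
# Model of a type chain: each type contributes attributes, and types
# added later override earlier ones, mirroring the right-to-left
# precedence described above.

def resolve(chain):
    """Collapse a type chain into its effective attributes."""
    attrs = {}
    for t in chain:          # walk left to right, so later types win
        attrs.update(t)
    return attrs

Int       = {"kind": "int", "width": 32}
allocated = {"location": "single"}
multiple  = {"location": "all"}   # overrides allocated's location

# Something like: var x : Int :: allocated :: multiple
print(resolve([Int, allocated, multiple])["location"])  # all
```

Swapping the order of `allocated` and `multiple` flips the resolved location, which is exactly the override behaviour the chain is meant to express.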
== Type Variables ==
Type variables are an interesting concept. Similar to normal program variables they are declared to hold a type chain. Throughout program execution they can be dealt with like normal program variables and can be checked via conditionals, assigned and modified.
== Advantages of the Approach ==
There are a number of advantages to type oriented programming:
* Efficiency - The rich amount of information allows the compiler to perform much static analysis and optimisation resulting in increased efficiency.
* Simplicity - By providing a clean type library the programmer can use well documented types to control many aspects of their code.
* Simpler language - By taking the majority of the language complexity away and placing it into a loosely coupled type library, the language is simpler from a design and implementation (compiler's) point of view. Adding numerous language keywords often results in a brittle design; type oriented programming avoids this.
* Maintainability - By changing the type one can have a considerable effect on the semantics of code; this abstraction makes the code simpler, more flexible and easier to maintain.
== Why use it in HPC ==
Current parallel languages all suffer from the simplicity versus efficiency compromise. Abstracting the programmer away from the low level details gives them a simple to use language, yet the high level of information provided to the compiler allows for much analysis to be performed during the compilation phase. In low level languages (such as C) it is difficult for the compiler to understand how the programmer is using parallelism, hence the optimisation of such code is limited.
We provide the programmer with the choice between explicit and implicit programming - they can rely on the inbuilt, safe, language defaults or alternatively use additional types to elicit more control (and performance.) Therefore the language is acceptable to both the novice and the expert parallel programmer.
== Other uses ==
* GUI Programming - GUI programming can be quite tiresome and repetitive (hence the use of graphical design IDEs.) Using types would abstract the programmer from many of these repetitive issues.
* Retrofit Existing Languages - The type approach could be applied to existing languages where a retrofit could be undertaken, keeping the programmer in their comfort zone but also giving them the power of type oriented programming.
* Numerous Type Systems - The type system is completely separate from the actual language; it would be possible to provide a number of type systems for a single language, such as a ''parallel'' system, a ''sequential'' system, etc.
e0093013db289d3ab010fb2fb105838a02beb26b
Extendable Types
0
154
847
846
2019-04-15T15:44:46Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
A major idea for extension is to allow the programmer to create their own language types. In the current version of the language the programmer can only create new types at the compiler level; this is not a major issue at the moment due to the generality of the type library, however it does limit the language somewhat. Whilst it is relatively simple to create new types in this way, one cannot expect the programmer to have to modify the compiler in order to support the codes they wish to write. There are, however, a number of issues to consider in relation to this aim.
* How to implement this efficiently?
* How to maximise static analysis and optimisation?
* How to minimise memory footprint?
* The ideal way of structuring the programming interface?
----
We have currently adopted a middle ground within the [[Oubliette]] compiler line, in as much as additional types may be provided as third party plugins which the compiler will recognise and allow the programmer to use freely. There is optional support for these third party types to provide additional runtime library services too. Whilst this is a reasonable interim step, the end goal is still to allow programmers to specify types within their own Mesham source code.
8d24e837d44b9a608f757297c08168a48b7040b7
General Additions
0
155
852
851
2019-04-15T15:44:47Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Accepted Additions ==
# [[Extendable Types]] - 0%
# Structure IO types - 0%
# Additional distribution types - 30%
# Group keyword - 100%
== Wish List ==
Please add here any features you would like to see in the upcoming development of Mesham.
97a88d2fe5e38eab0a9c2fbf41c903290196bb3a
Extentable Types
0
156
854
853
2019-04-15T15:44:47Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[Extendable Types]]
3b199f3fd3cfdb26ed0551cf6bc5565500055b0d
New Compiler
0
157
859
858
2019-04-15T15:44:47Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
''Completed March 2013''
We have completely rewritten the Mesham compiler, replacing the [[Arjuna]] line (up to version 0.5) with the [[Oubliette]] line (from version 1.0 onwards.) Further details about these compilers can be found on their respective pages. The previous [[Arjuna]] line is deprecated.
----
''The following is a statement of intent that we wrote when deciding to rewrite the compiler''
The current Mesham compiler is mainly written in FlexibO, using Java to preprocess the source code. Whilst this combination is flexible, it is not particularly efficient in the compilation phase. To this end we are looking to reimplement the compiler in C. This reimplementation will allow us to combine all aspects of the compiler in one package, remove deprecated implementation code, clean up aspects of the compilation process, fix compiler bugs and provide a structured framework into which types can fit.
Like previous versions of the compiler, the results will be completely portable.
This page will be updated with news and developments in relation to this new compiler implementation.
c0eb3a0b4e46394b36d70cbcd32af3b802beb6a3
Download 0.5
0
158
868
867
2019-04-15T15:44:47Z
Polas
1
8 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Mesham 0.5|author=[[User:polas|Nick Brown]]|desc=The latest release from the Arjuna compiler line. Based upon FlexibO this version is deprecated but still contains some useful types.|url=http://www.mesham.com|image=mesham.gif|version=0.5|released=January 2010}}
''Please Note: This version of Mesham is deprecated, the documentation and examples on this website are no longer compatible with this version.''
== Version 0.5 ==
Version 0.5 of Mesham is currently the latest version of the language from the [[Arjuna]] line and contains numerous additions and improvements over 0.41(b). However, this version of the compiler does not explicitly support Windows (mainly in the runtime library), although more experienced developers may be able to compile it on Windows.
== Download ==
You can download [http://www.mesham.com/downloads/mesham5.tar.gz Mesham 0.5 here] (700KB)
== Installation Instructions ==
There are three basic components required for installing Mesham - the client, the server and the runtime library.
* Install the Java RTE from java.sun.com
* Make sure you have a C compiler installed, e.g. gcc
* Install an implementation of MPI - MPICH (version 2) and OpenMPI are both good choices
* The three components must be configured for your machine and where they are situated; happily this is all automated in the installlinux script.
Open a terminal and cd into your Mesham directory, e.g. cd /home/work/mesham
Then issue the command ./installlinux and follow the on screen prompts.
If there is an issue with running the command, use the command chmod +x installlinux and then try running it again.
After running the install script, the library, compiler and server should not be moved from where they are - doing so will cause problems, and if required you must rerun the script and remake them.
* Now type make all
* If you have root access, login as root and type make install
* Now type make clean (to clean up the directory)
Congratulations! If you have completed these steps you have installed the Mesham language onto your computer!
== Using the Compiler ==
Assuming you have installed the language you will now want to start writing some code! Firstly you will need to start the Mesham translation server: cd into your mesham/server directory and type ./runserver . The server will start up, telling you the version number and date of the Mesham compiler, and then will report when it is ready.
Next, start a new terminal. If you are using MPICH 2, run an MPI daemon by typing mpd & . Create a Mesham source file (look in the language documentation for information about the language itself) and compile it via mc. For instance, if the source file is named hello.mesh, compile it via mc hello.mesh . You should see an executable called hello.
Run the executable via ./hello (or whatever it is called.) You do not need to (although you can if you want) run it via the mpirun or mpiexec command, as the executable will automatically spawn the number of processes it requires.
If you don't wish to compile, but just want to view the generated C code, you can run linuxgui.sh in compiler/java.
Nb: If you wish to change the configuration information created by the installer (this is for advanced users and is not required) then you can - the installer tells you where it has written its config files, and the documentation is included in the respective source folders.
== Runtime Library Options ==
Included in the runtime library (0.2) are a number of optional features which are disabled by default. These can be enabled by editing the makefile and removing the ''#'' before the specific line. The two optional features are the files in support of the Gadget-2 port (Peano-Hilbert curve, snapshot files and the parameter file) and HDF5 support (which requires the HDF5 library to be installed on the machine.)
f8ba7b9e7768083ed0fc63c6a4db07efc532645b
Download rtl 0.2
0
159
875
874
2019-04-15T15:44:47Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Runtime library 0.2|author=[[User:polas|Nick Brown]]|desc=The runtime library required for Mesham 0.5.|url=http://www.mesham.com|image=Runtimelibrary.png|version=0.2|released=January 2010}}
''Please Note: This version of the runtime library is deprecated but required for [[Download_0.5|Mesham 0.5]]''
== Runtime Library Version 0.2 ==
Version 0.2 is a legacy version of the Mesham RTL and is required by Mesham 0.5. This version of the library contains many advantages and improvements over the previous version and as such it is suggested you use it. The version on this page is backwards compatible with version 0.41(b). This version does not explicitly support the Windows OS, although it will be possible for an experienced programmer to install it on that system.
== Download ==
You can download the [http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2 here] (28KB)
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 0.5|Download 0.5 Package]] page.
c03a27f82ed564f4f3572a8f41b9f66c2ba12a65
File:Flexdetail.jpg
6
160
877
876
2019-04-15T15:44:47Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Flexibo translation in detail
ed996494fbc47b463d3de57ba1ef36c89c656483
File:Overview.jpg
6
161
879
878
2019-04-15T15:44:47Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Overview of Translation Process
194801d32004be3229ac704ed630d88f5ac83f55
The Arjuna Compiler
0
162
888
887
2019-04-15T15:44:48Z
Polas
1
8 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
== Overview ==
''' This page refers to the [[Arjuna]] line of compilers which is up to version 0.5 and is legacy with respect to the latest [[Oubliette]] 1.0 line'''
Although not essential to the programmer, it is quite useful to know the basics of how the implementation hierarchy works.
The core translator produces ANSI standard C99 code which uses the Message Passing Interface (version 2) for communication. Therefore, on the target machine, an implementation of MPI, such as OpenMPI, MPICH or a vendor-specific MPI, is required; any of these will work with the generated code. Additionally our runtime library (known as LOGS) also needs to be linked in. The runtime library performs two roles - firstly it is architecture specific (versions exist for Linux, Windows etc.) as it contains any non-portable code which is needed and is also optimised for specific platforms. Secondly the runtime library contains functions which are called often and would otherwise increase the size of the generated C code.
<center>[[Image:overview.jpg|Overview of Translation Process]]</center>
The resulting executable can be thought of as any normal executable, and can be run in a number of ways. For simplicity, the user can execute it by double clicking it; the program will automatically spawn the number of processes required. Alternatively the executable can be run via the MPI daemon, and may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each etc.)
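The wrapping described here is simple ceiling division; a minimal sketch in Python (the function name is illustrative, not part of Mesham or MPI):

```python
def processes_per_core(num_processes, num_cores):
    """Processes per core when processes wrap evenly around the cores.

    Returns the count on the busiest core (ceiling division), matching the
    examples in the text: 2 processes on 2 cores is 1 each, 6 on 2 is 3 each.
    """
    return -(-num_processes // num_cores)

print(processes_per_core(2, 2))  # 1
print(processes_per_core(6, 2))  # 3
```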
== Translation In More Detail ==
The translator itself is contained within a number of different phases. Firstly, your Mesham code goes through a preprocessor, written in Java, which does a number of jobs, such as adding scoping information. When this is complete it then gets sent to the translation server - owing to the design of FlexibO, the language we wrote the translator in, the actual translation is performed by a server listening over TCP/IP. This server can be on the local machine, or a remote one, depending on the setup of your network. Once translation has completed, the generated C code is sent back to the client via TCP/IP and from there can be compiled. The most important benefit of this approach is flexibility - most commonly we use Mesham via the command line, however a web based interface also exists, allowing code to be written without the programmer installing any software on their machine.
<center>[[Image:flexdetail.jpg|Flexibo translation in detail]]</center>
== Command Line Options ==
* '''-o [name]''' ''Select output filename''
* '''-I[dir]''' ''Look in the directory (as well as the current one) for preprocessor files''
* '''-c''' ''Output C code only''
* '''-t''' ''Just link and output C code''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-s''' ''Silent operation (no warnings)''
* '''-f [args]''' ''Forward Arguments to C compiler''
* '''-pp''' ''Just preprocess the Mesham source and output results''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-debug''' ''Display compiler structural warnings before rerunning''
== Static and Dynamic Linking Against the RTL ==
The option is given to statically or dynamically link against the runtime library. Linking statically will place a copy of the RTL within your executable - the advantage is that the RTL need not be installed on the target machine; the executable is completely self contained. Linking dynamically means that the RTL must be on the target machine (and is linked in at runtime); the advantage of this is that the executable is considerably smaller and a change in the RTL need not result in all your code requiring a recompile.
0d156f5cb49d5db27a4034700f9ea364b810ae48
Wish List
0
163
890
889
2019-04-15T15:44:48Z
Polas
1
1 revision imported
wikitext
text/x-wiki
We have numerous items with which we appreciate any assistance, these include:
* Assistance with compiler implementation
* Assistance with language design
* Improving the documentation online
* Providing more code examples
* Improving the website
* Anything else.... ''just tell us you are working on it''
b6dacc006098c5353ec884c202986556c1544d52
MediaWiki:Sidebar
8
164
895
894
2019-04-15T15:44:48Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** downloads|Downloads
** What is Mesham|What is Mesham
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
75d8dbed3fe525b5805fb64369c157b4c974c204
Downloads
0
165
910
909
2019-04-15T15:44:48Z
Polas
1
14 revisions imported
wikitext
text/x-wiki
<metadesc>All the files provided for downloads involved with Mesham</metadesc>
''This page contains all the downloads available on this website''
== Latest compiler ==
These are the latest ([[Oubliette|oubliette]]) compiler files
== Language specification ==
[http://www.mesham.com/downloads/specification1a3.pdf Mesham language specification 1.0a3]
== Legacy Arjuna compiler files ==
The [[Arjuna]] compiler line is legacy, but we have kept the downloads available in case people find them useful
[http://www.mesham.com/downloads/mesham5.tar.gz Mesham Version 0.5] ''legacy''
[http://www.mesham.com/downloads/libraries2.tar.gz Runtime Library 0.2] ''legacy''
[http://www.mesham.com/downloads/all04b.zip Mesham Version 0.41(b)] ''legacy''
[http://www.mesham.com/downloads/libraries01.zip Runtime Library 0.1 source] ''legacy''
[http://www.mesham.com/downloads/win32binlibrary01.zip Runtime Library 0.1 Win32 binary] ''legacy''
== Example codes ==
[http://www.mesham.com/downloads/npb.tar.gz NASA's Parallel Benchmark IS]
[http://www.mesham.com/downloads/mandle.mesh Mandelbrot]
[http://www.mesham.com/downloads/pi.mesh Dartboard Method to find PI]
[http://www.mesham.com/downloads/fact.mesh Prime Factorization]
[http://www.mesham.com/downloads/prefix.mesh Prefix Sums]
[http://www.mesham.com/downloads/fftimage.zip Image Processing using Filters]
== Misc ==
[http://www.mesham.com/downloads/Gui.zip Parallel Random Access Machine Simulator]
[http://www.mesham.com/downloads/apl.zip APL, the very simple programming language for the PRAM simulator]
ca879a7ea1a179032f349badf862424dd7f70d46
File:2gb.jpg
6
166
912
911
2019-04-15T15:44:48Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Fast Fourier Transformation with 2GB of data
729d28baa79fd9f53106a7732768ce410b323819
File:128.jpg
6
167
914
913
2019-04-15T15:44:48Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Fast Fourier Transformation example performed with 128MB data
9673f48589455b2c2e20aa52d4982130e782a79c
File:Mandlezoom.jpg
6
168
916
915
2019-04-15T15:44:48Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Mandelbrot Performance Tests
56594bf810192a48e1ce114b660f32c20a23f5a8
File:Classc.jpg
6
169
918
917
2019-04-15T15:44:48Z
Polas
1
1 revision imported
wikitext
text/x-wiki
NASA's Parallel Benchmark IS class C
67f08d79b2a9e83a032fb5034744f2ce3905862e
File:Classb.jpg
6
170
920
919
2019-04-15T15:44:49Z
Polas
1
1 revision imported
wikitext
text/x-wiki
NASA's Parallel Benchmark IS class B
8d320be9de4ed6ba04c6c52f56a8c0132f826055
File:Total.jpg
6
171
922
921
2019-04-15T15:44:49Z
Polas
1
1 revision imported
wikitext
text/x-wiki
NASA's Parallel Benchmark IS Total Million Operations per Second
e52f52f4684a6027386206f785248aa917b0cfa9
File:Process.jpg
6
172
924
923
2019-04-15T15:44:49Z
Polas
1
1 revision imported
wikitext
text/x-wiki
NASA's Parallel Benchmark IS Million Operations per Second per Process
5b31c180dca090e6f04338f0483305428ace98e5
Download all
0
173
926
925
2019-04-15T15:44:49Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[Download 0.4 beta]]
fb600f41038a18ac86401b6794c59d2416f2c8e0
Download 0.4 beta
0
174
928
927
2019-04-15T15:44:49Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[Download 0.41 beta]]
3c0e33103c156212989cb34560adc6935c552cd4
Arjuna
0
175
937
936
2019-04-15T15:44:49Z
Polas
1
8 revisions imported
wikitext
text/x-wiki
[[File:mesham.gif|right]]
==Introduction==
The Arjuna line of compilers for Mesham is versioned from 0.0 up to 0.99. The latest compiler release based upon the Arjuna core is [[Download_0.5|0.5]]. The reason for the distinction is that it was decided to rewrite the compiler and as such a clear distinction between the architectures and technology is useful. Arjuna was the informal name of the language, and specifically the compiler, before Mesham was decided upon.
== Download ==
'''The Arjuna line is entirely deprecated now, please use the [[Oubliette]] line'''
It is possible to download the latest Arjuna line version 0.5 [[Download_0.5|here]] and the compatible runtime can be found [[Download_rtl_0.2|here]]. Whilst the website examples and documentation have moved on, you can view the change lists to understand how to use the Arjuna line.
We also provide an earlier version (0.41b) which is the last released version to support the Windows operating system. That version can be downloaded [[Download_0.41_beta|here]] and the corresponding runtime library [[Download_rtl_0.1|here]].
==Technology==
Arjuna is based upon a number of different technologies. The main compiling system is written in FlexibO, an experimental object oriented language designed to be used for compiler writing (this is certainly the biggest project in that language.) The reason for this choice was that using this language the compiler was fast to write and very flexible, but quite slow in translation. This code aspect is around 20,000 lines, which pushed FlexibO to, and in some cases beyond, its limits. FlexibO abstracts the syntactic stage, providing automatic lexing and parsing. The core compiler is based around a reflection system, with the type and function libraries, also written in FlexibO, quite separate and connected in via defined services.
FlexibO does have its limits and as such a preprocessor was written in Java to convert Mesham into a preprocessed form for use by the core compiler. This preprocessor, around 2000 lines, is used as a band-aid for FlexibO and, for instance, adds in scoping information without which the compiler would not operate.
The third major aspect, although not integrated with the compiler, is the runtime support library. This has been written in C, around 3000 lines, and a version exists for each machine architecture to support portability. The runtime library in the next line of compilers, [[Oubliette]], is actually based on the existing RTL, but changes and modifications to the language specification mean that the two are not mutually compatible.
For more information about the Arjuna compiler, have a look at [[The Arjuna Compiler]]
==Advantages==
Arjuna works by the compiler writer hand crafting each aspect, whether it is a core function or part of the library, specifying the resulting compiled code and any optimisation to be applied. Whilst this produces very efficient results, it is time consuming and does not allow the Mesham programmer to specify their own types in their code. Arjuna is also very flexible; vast changes in the language were quite easy to implement. This level of flexibility would not be present in other solutions and as such, from an iterative language design view, it was an essential requirement.
==Disadvantages==
So why rewrite the compiler? Flexibility comes at a price: slow compilation. Now the language has reached a level of maturity, the core aspects can be written without worry that they will change much. Also it would be good to allow programmers to design and implement types in their own Mesham code, which the architecture of Arjuna would find difficult to support (although not impossible.)
There is the additional fact that Arjuna has been modified and patched so much that the initial clean design is starting to blur; with the lessons learned a much cleaner compiler can be created.
5ff5b5348b37f24f4083b5955d0e254d80e29f04
Oubliette
0
176
968
967
2019-04-15T15:44:50Z
Polas
1
30 revisions imported
wikitext
text/x-wiki
[[File:oubliette.png|right]]
== Introduction ==
Oubliette is the Mesham compiler line from version 1.0 onwards. This line is completely rewritten from the previous [[Arjuna]], using lessons learned and the fact that the language has reached a stable state in terms of definition and the type oriented approach.
== Download ==
You can download the latest compiler and supporting libraries '''[[Download 1.0|here]]'''.
== Implementation ==
Oubliette is written in C++ and uses Flex for tokenisation and Bison for parsing. The type library is entirely separate and it is intended in the future to support extra libraries via dynamic libraries. Unlike [[Arjuna]], which has the standard function library hard coded in the compiler, Oubliette just considers these to be normal Mesham source code files which are included by the programmer. This approach gives increased flexibility and a cleaner compiler.
All the types have been entirely rewritten and an API (documentation to follow) has been created to allow for third party extensions.
== Idaho ==
To support this new version of the compiler, the runtime library has been reengineered and released under the name Idaho. This version supports the functionality required by Mesham, the standard library and some internal housekeeping work.
== Update history ==
=== Latest (to be released) ===
=== Build 411 (August 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a6.pdf specification 1a6]
* Updated proc, par and group semantics to be non-blocking on entry to the blocks
* Sleep system function added
* Abstracted all communications into a lower level communications layer
* Additional version and requirements reporting in the resulting executable
* Heap default bug fix for reference records
* Threading support added which allows for virtual processors to be lightweight threads rather than processes
=== Build 356 (May 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a5.pdf specification 1a5]
* Reworked expression grammar for improved parsing
* Extended string function library to include more advanced handling functions
* Inline if operator supported
* Texas range in group parallel statement
* Primitive collective communication types accept arrays of data sizes and displacements
* Additional assignment operators involving plus, subtraction, multiplication, division and modulus assignments
* Array distribution type added
* Support dynamic partitioning and distribution of data
* Improved support for cycling and distinct number of multiple partitions per process
* Remote reading and writing depending on global or per block coordinates supported for partitions
* Local partition block copying on assignment
* Eager one sided communication type, which completes as soon as issued, added to library
=== Build 299 (March 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a4.pdf specification 1a4]
* Support for dynamic number of processes above minimum value
* Primitive communication and modes support dynamic (decided at runtime) PID, size and operation arguments
* -p flag added to all Mesham executables which reports the minimum number of processes needed
* Functions can be in any order even if we are using the return type to declare a variable
* Support for dynamically loading type libraries which are provided as extension .so libraries
* Support for external runtime libraries to be linked in during compilation
* Environment arguments provided to underlying C compiler for optimisation
* Improvements to dynamic partitioning runtime support
=== Build 241 (January 2013) ===
* Based on [http://www.mesham.com/downloads/specification1a3.pdf specification 1a3]
* First alpha release of the Oubliette compiler
4a36fbe5b496c73b4fd76247163a8eb4ee1e2478
Specification
0
177
979
978
2019-04-15T15:44:50Z
Polas
1
10 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham is a type oriented programming language allowing the writing of high performance parallel codes which are efficient yet simple to write and maintain</metadesc>
{{Applicationbox|name=Specification 1.0a_6|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham language specification|url=http://www.mesham.com|image=Spec.png|version=1.0a_6|released=August 2013}}
''The latest version of the Mesham language specification is 1.0a_6''
== Version 1.0a_6 - August 2013 ==
''Please note that this is an alpha version and as such the specification is liable to change.''
The latest version of the language specification, 1.0a_6 is available for download. This version was released August 2013 and is the base specification version in the 1 series. It builds upon the previous 0.5 language by formalising some of the aspects of the language and the programming model. The type library has been formalised to contain much of the 0.5 language types but with a view to maximising consistency. The function library has been overhauled with the aim of providing a basic set of functionality which can be used by the programmer.
Download [http://www.mesham.com/downloads/specification1a6.pdf this latest version here]
549852b7dc57e9343f767d658301d7d087d44fb3
Include
0
179
987
986
2019-04-15T15:44:51Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
<nowiki>#</nowiki>include [sourcefile]
== Semantics ==
Will read in the specified Mesham source file and embed its contents into the code at the current location
== Example ==
#include "test.mesh"
#include <io>
The preprocessing stage will replace the first include with the contents of ''test.mesh'', and the second include with the contents of ''io''. In the absence of the ''.mesh'' ending, the preprocessor will attempt to match on the absolute filename first and, if this cannot be found, will then look for a file with the corresponding name and the ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will also search the system include locations, which have priority.
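The lookup order for the filename can be sketched in Python (this is an illustration of the rule described above, not the actual preprocessor; the function name is hypothetical):

```python
import os

def resolve_include(name, search_dirs):
    """Locate an included file: try the exact name first, then with
    '.mesh' appended, across the given search directories in order."""
    candidates = [name]
    if not name.endswith(".mesh"):
        candidates.append(name + ".mesh")
    for candidate in candidates:
        for directory in search_dirs:
            path = os.path.join(directory, candidate)
            if os.path.isfile(path):
                return path
    return None  # not found anywhere
```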
''Since: Version 1.0''
[[Category:preprocessor]]
89ecdfa69e81e809e4cf9bab090f498a61b970b0
Include once
0
180
990
989
2019-04-15T15:44:51Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Syntax ==
<nowiki>#</nowiki>include_once [sourcefile]
== Semantics ==
Will read in the specified Mesham source file and embed its contents into the code at the current location IF AND ONLY IF that specific file has not already been included. This is a very useful mechanism for avoiding duplicate includes when combining multiple libraries.
== Example ==
#include_once "test.mesh"
#include_once "test.mesh"
The preprocessing stage will replace the first include with the contents of ''test.mesh'', but the second include_once will be ignored because that specific file has already been included. In the absence of the ''.mesh'' ending, the preprocessor will attempt to match on the absolute filename first and, if this cannot be found, will then look for a file with the corresponding name and the ''.mesh'' ending.
The preprocessor will search the include directories when the filename is contained in quotation marks. If it is contained within ''< >'' then the preprocessor will also search the system include locations, which have priority.
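The include_once bookkeeping amounts to a seen-set; a minimal Python sketch of the behaviour (the sources dictionary stands in for files on disk and the function is purely illustrative, not the real preprocessor):

```python
def preprocess(lines, sources, seen=None):
    """Expand #include_once directives, skipping files already embedded."""
    if seen is None:
        seen = set()
    output = []
    for line in lines:
        if line.startswith("#include_once"):
            name = line.split(None, 1)[1].strip().strip('"')
            if name in seen:
                continue  # already embedded once: later directives are ignored
            seen.add(name)
            output.extend(preprocess(sources[name], sources, seen))
        else:
            output.append(line)
    return output

program = ['#include_once "test.mesh"', '#include_once "test.mesh"', 'var y;']
print(preprocess(program, {"test.mesh": ["var x;"]}))  # ['var x;', 'var y;']
```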
''Since: Version 1.0''
[[Category:preprocessor]]
67f9cfa9082f92d9e2e06a21298503496b646339
Group
0
181
1005
1004
2019-04-15T15:44:51Z
Polas
1
14 revisions imported
wikitext
text/x-wiki
== Syntax ==
group n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub><br>
{<br>
group body<br>
};<br>
where n<sub>1</sub>,n<sub>2</sub>,...,n<sub>d</sub> are specific process ranks; values, variables or texas range (with limits) known at compile time.
== Semantics ==
Will execute the group body on different processes as specified by the programmer. This allows the programmer to write code MPMD style and, unlike the ''par'' block, Mesham guarantees process placement. Variables declared to be multiply allocated within parallel scope, such as a par block, will automatically be allocated just to the subgroup of processes within that scope.<br><br>
''Note:'' Unlike a ''par'' loop, the ''group'' guarantees that the ranks supplied will be the ranks of those processes executing the block code.<br>
''Note:'' The texas range ''...'' is supported, although it can only appear between values (it specifies a range) and the preceding value must be smaller than or equal to the following one.
== Example ==
#include <io>
function void main() {
group 0, 3 {
print("Hello world from pid 0 or 3\n");
};
group 1,...,3,5,...,8 {
print("Hello world from pid 1, 2, 3, 5, 6, 7 or 8\n");
};
};
The code fragment will involve 9 processes (0 to 8 inclusive.) Only processes zero and three will display the first message; the second message is displayed by the larger set of processes described by the texas ranges.
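To make explicit which ranks a texas range covers, a small Python sketch that expands a rank list such as 1,...,3,5,...,8 (the parser here is purely illustrative, not part of the Mesham compiler):

```python
def expand_ranks(spec):
    """Expand a group rank list, where '...' fills the inclusive range
    between the surrounding values (a 'texas range')."""
    tokens = spec.split(",")
    ranks = []
    i = 0
    while i < len(tokens):
        if tokens[i] == "...":
            lo, hi = ranks[-1], int(tokens[i + 1])
            ranks.extend(range(lo + 1, hi + 1))  # fill up to and including hi
            i += 2  # consume the '...' and its upper bound
        else:
            ranks.append(int(tokens[i]))
            i += 1
    return ranks

print(expand_ranks("1,...,3,5,...,8"))  # [1, 2, 3, 5, 6, 7, 8]
```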
''Since: Version 1.0''
[[Category:Parallel]]
579a2c9fc2fb20c2854e2eacd859867573d26b72
Short
0
182
1010
1009
2019-04-15T15:44:51Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Syntax ==
Short
== Semantics ==
A single whole 16-bit number.
=== Default typing ===
{{ElementDefaultTypes}}
== Example ==
function void main() {
var i:Short;
};
In this example variable ''i'' is explicitly declared to be of type ''Short''.
''Since: Version 1.0''
== Communication ==
{{ElementTypeCommunication}}
[[Category:Element Types]]
[[Category:Type Library]]
48db9041d021682ecc620a1978233cbb4c48060b
Template:ElementDefaultTypes
10
183
1013
1012
2019-04-15T15:44:51Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
In the absence of further type information, the following types are added to the chain:
* [[allocated]]
* [[multiple]]
* [[stack]]
* [[onesided]]
054b8a87e6d2346be0d60a15229cccf84f0b88f5
Stack
0
184
1019
1018
2019-04-15T15:44:52Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
stack[]
== Semantics ==
Instructs the environment to bind the associated variable to stack frame memory which exists for a specific function only whilst it is ''alive.'' Once the corresponding function has returned then the memory is freed and hence this variable ceases to exist.<br><br>
''Note:'' This type, used for function parameters or the return type, instructs pass by value
== Example ==
function void main() {
var i:Int :: allocated[stack];
};
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the stack frame of the current function. Note how we have omitted the optional braces to the ''stack'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
f6693fc301e6aa97a613855f215ad03695868192
Heap
0
185
1026
1025
2019-04-15T15:44:52Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Syntax ==
heap[]
== Semantics ==
Instructs the environment to bind the associated variable to heap memory which exists regardless of runtime context.<br><br>
''Note:'' All heap memory is garbage collected. The specifics of this depend on the runtime library; broadly, when the memory goes out of scope it will be collected at some future point. Although not necessary, you can assign the ''null'' value to the variable, which will drop a reference to the memory.
''Note:'' This type, used for function parameters or the return type, instructs pass by reference
== Example ==
function void main() {
var i:Int :: allocated[heap];
};
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also on the heap. Note how we have omitted the optional braces to the ''heap'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
75eba820c64997cc5b3af905d3cefc01f4ec6f13
Static
0
186
1032
1031
2019-04-15T15:44:52Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Syntax ==
static[]
== Semantics ==
Instructs the environment to bind the associated variable to static memory. Because it is allocated into static memory, this is the same physical memory per function call and loop iteration (environment binding only occurs once.)<br><br>
''Note:'' This type, used for function parameters or the return type, instructs pass by value
== Example ==
function void main() {
var i:Int :: allocated[static];
};
In this example variable ''i'' is declared as an integer and allocated to all processes (by default) and also in static memory. Note how we have omitted the optional braces to the ''static'' type as there are no arguments.
''Since: Version 1.0''
== Default allocation strategies ==
{{Template:DefaultMemoryAllocation}}
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Allocation Types]]
73ceadc619419c5965d3c2c7e39c99da668c2558
Template:DefaultMemoryAllocation
10
187
1034
1033
2019-04-15T15:44:52Z
Polas
1
1 revision imported
wikitext
text/x-wiki
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Type
! Default allocation strategy
|-
| [[:Category:Element Types|All element types]]
| [[Stack]]
|-
| [[Array]]
| [[Heap]]
|-
| [[Record]]
| [[Stack]]
|-
| [[Referencerecord|Reference record]]
| [[Heap]]
|}
566e725490d0f853bfaba7ca88d4f8cf04193b0a
Template:ReductionOperations
10
188
1037
1036
2019-04-15T15:44:52Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
{| border="1" cellspacing="0" cellpadding="5" align="left"
! Operator
! Description
|-
| max
| Identify the maximum value
|-
| min
| Identify the minimum value
|-
| sum
| Sum all the values together
|-
| prod
| Generate product of all values
|}
2af12cb1ab4f0b0538c77b96fec83ff7e9ffac5c
Category:Compound Types
14
189
1039
1038
2019-04-15T15:44:52Z
Polas
1
1 revision imported
wikitext
text/x-wiki
[[Category:Type Library]]
59080a51ca9983880b93aaf73676382c72785431
Sin
0
190
1045
1044
2019-04-15T15:44:53Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
The sin(d) function will find the sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find sine of
* '''Returns:''' A [[Double]] representing the sine
== Example ==
#include <maths>
function void main() {
var a:=sin(98.54);
var y:=sin(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
1bf701a3975874ef8d7b79f93cad35e9ce4db53a
Tan
0
191
1051
1050
2019-04-15T15:44:53Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
The tan(d) function will find the tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the tangent of
* '''Returns:''' A [[Double]] representing the tangent
== Example ==
#include <maths>
function void main() {
var a:=tan(0.05);
var y:=tan(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
d67f1b0fc6a1f729c22f6eb54c1fd4d62b82fc25
Acos
0
192
1058
1057
2019-04-15T15:44:53Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
== Overview ==
The acos(d) function will find the inverse cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse cosine of
* '''Returns:''' A [[Double]] representing the inverse cosine
== Example ==
#include <maths>
function void main() {
var d:=acos(0.9);
var y:=acos(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
e7ca8b4dffeb65f5987cb0d86289f816ad66ef5c
Asin
0
193
1064
1063
2019-04-15T15:44:53Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
The asin(d) function will find the inverse sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse sine of
* '''Returns:''' A [[Double]] representing the inverse sine
== Example ==
#include <maths>
function void main() {
var d:=asin(0.5);
var y:=asin(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
ca2ee53e5aac063485d2a3761ae262f6ce52f14b
Atan
0
194
1070
1069
2019-04-15T15:44:53Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
== Overview ==
The atan(d) function will find the inverse tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the inverse tangent of
* '''Returns:''' A [[Double]] representing the inverse tangent
== Example ==
#include <maths>
function void main() {
var d:=atan(876.3);
var y:=atan(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
110a6e4c40e637fb0021356dece79e5a2086df0f
Cosh
0
195
1075
1074
2019-04-15T15:44:54Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
The cosh(d) function will find the hyperbolic cosine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic cosine of
* '''Returns:''' A [[Double]] representing the hyperbolic cosine
== Example ==
#include <maths>
function void main() {
var d:=cosh(10.4);
var y:=cosh(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
285dfd293f100de431db1ccafc6a7a8a938b3b4c
Sinh
0
196
1080
1079
2019-04-15T15:44:54Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
The sinh(d) function will find the hyperbolic sine of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic sine of
* '''Returns:''' A [[Double]] representing the hyperbolic sine
== Example ==
#include <maths>
function void main() {
var d:=sinh(0.4);
var y:=sinh(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
a8ab9d56598ae9b404186dcbc44c07e9d590a3ae
Tanh
0
197
1085
1084
2019-04-15T15:44:54Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
The tanh(d) function will find the hyperbolic tangent of the value or variable ''d'' passed to it.
* '''Pass:''' A [[Double]] to find the hyperbolic tangent of
* '''Returns:''' A [[Double]] representing the hyperbolic tangent
== Example ==
#include <maths>
function void main() {
var d:=tanh(10.4);
var y:=tanh(d);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
9f45406098a6bd8a6a89929c6462917eed3e95ca
Ceil
0
198
1090
1089
2019-04-15T15:44:54Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
The ceil(d) function will find the smallest integer greater than or equal to ''d''.
* '''Pass:''' A [[Double]] to find the ceil of
* '''Returns:''' An [[Int]] representing the ceiling
== Example ==
#include <maths>
function void main() {
var a:=ceil(10.5);
var y:=ceil(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
ca7f759657ced14b3d68ea3874f9fe15f55687ca
Log10
0
199
1094
1093
2019-04-15T15:44:54Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
The log10(d) function will find the base 10 logarithm of ''d''
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the base 10 logarithm
== Example ==
#include <maths>
function void main() {
var a:=log10(154.2);
var y:=log10(a);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:Maths Functions]]
a3be85eeeb434e2934290a031224406429310522
Complex
0
200
1099
1098
2019-04-15T15:44:54Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
The ''complex'' type is defined within the mathematical library to represent a complex number with real (''r'') and imaginary (''i'') components. It is built from a [[record]] type with both components held as [[Double|doubles]].
== Example ==
#include <maths>
function void main() {
var a:complex;
a.i:=19.65;
a.r:=23.44;
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
1d02817d5ee922340f5ebbed4d0796f7df3015a9
Close
0
201
1104
1103
2019-04-15T15:44:54Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
The close(f) function will close the file represented by handle ''f''
* '''Pass:''' A [[File]] handle
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:=open("myfile.txt","r");
close(f);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
ecc7c40b6f4c9193d8dd13baf2b38663f6bd305d
Open
0
202
1108
1107
2019-04-15T15:44:54Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
The open(n,a) function will open the file named ''n'' in mode ''a''.
* '''Pass:''' The name of the file to open of type [[String]] and mode of type [[String]]
* '''Returns:''' A file handle of type [[File]]
== Example ==
#include <io>
function void main() {
var f:=open("myfile.txt","r");
close(f);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
37dcc748ba2a4854d15fc176a7249151b73b0592
Writestring
0
203
1113
1112
2019-04-15T15:44:55Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
The writestring(f,a) function will write the string ''a'' to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[String]] to write
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","w");
writestring(f,"hello - test");
close(f);
};
''Since: Version 0.41b''
[[Category:Function Library]]
[[Category:IO Functions]]
a225f241d485ee11815b9de22d16963d5af7727a
Writebinary
0
204
1117
1116
2019-04-15T15:44:55Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
The writebinary(f,a) function will write the value of ''a'', in binary form, to the file denoted by handle ''f''.
* '''Pass:''' The [[File]] handle to write to and the [[Int]] variable or value to write into the file in a binary manner
* '''Returns:''' Nothing
== Example ==
#include <io>
function void main() {
var f:=open("hello.txt","w");
writebinary(f,127);
close(f);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:IO Functions]]
067db57756ce74a273bc21e9256cbdce6328264c
Itostring
0
205
1121
1120
2019-04-15T15:44:55Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
The itostring(n) function will convert the variable or value ''n'' into a string.
* '''Pass:''' An [[Int]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:=234;
var c:=itostring(a);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
a801bb61bb1b30a65cdb27eb72174c5316d9d306
Dtostring
0
206
1125
1124
2019-04-15T15:44:55Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
The dtostring(d, a) function will convert the variable or value ''d'' into a string using the formatting supplied in ''a''.
* '''Pass:''' A [[Double]] and [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
function void main() {
var a:=23.4352;
var c:=dtostring(a, "%.2f");
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
b5552df822b385b1c05b0ccee8c112db3f006998
Getepoch
0
207
1128
1127
2019-04-15T15:44:55Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Overview ==
The getepoch() function will return the number of milliseconds since the epoch (1st January 1970).
* '''Pass:''' Nothing
* '''Returns:''' [[Long]] containing the number of milliseconds
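== Example ==
An illustrative sketch of timing a section of code. The ''system'' include name is an assumption, and ''skip'' stands in for the work being timed:
#include <system>
function void main() {
var start:=getepoch();
skip;
var elapsed:=getepoch() - start;
};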
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:System Functions]]
62a04821a2697c24594afdfac428529d7416fc9e
Gc
0
208
1131
1130
2019-04-15T15:44:55Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Overview ==
The gc() function will collect any garbage memory. Memory allocated via the [[Heap]] type is subject to garbage collection, which will occur automatically during program execution but can be invoked manually via this function call.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
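== Example ==
An illustrative sketch only (the ''system'' include name and the type chain are assumptions): memory is allocated via the [[Heap|heap]] type and then the collector is invoked manually:
#include <system>
function void main() {
var a:Int::heap;
a:=12;
gc();
};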
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:System Functions]]
19028e7244b6e1d98c433c5bd2a9c8f2c2da309a
Template:News
10
209
1147
1146
2019-04-15T15:44:56Z
Polas
1
15 revisions imported
wikitext
text/x-wiki
* Mesham at the PGAS 2013 conference, paper downloadable [http://www.pgas2013.org.uk/sites/default/files/finalpapers/Day2/R5/1_paper12.pdf here]
* Specification version 1.0a6 released [http://www.mesham.com/downloads/specification1a6.pdf here]
* Update to Mesham alpha release ''(1.0.0_411)'' available [[Download 1.0|here]]
16d937c0aba3a573746d0588ab0eb726748f7668
Template:Applicationbox
10
210
1150
1149
2019-04-15T15:44:56Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
{| class="infobox bordered" style="background-color:#DDDDDD; border-color:#111111; border-style:solid; border-width:1px; float:right; font-size:90%; margin:5px 5px 5px 5px; text-align:left; width:30em;"
|-
| colspan="2" style="text-align:center; font-size: large;" | '''{{{name}}}'''
|-
! Icon:
| [[Image:{{{image}}}|left|{{{caption}}}]]
|-
! Description:
| {{{desc}}}
|-
! Version:
| {{{version}}}
|-
! Released:
| {{{released}}}
|-
! Author:
| {{{author}}}
|-
! Website:
| {{{url}}}
|}
22cce7a679ed7459a3a917991418a2fc61831c0a
File:Mesham.gif
6
211
1152
1151
2019-04-15T15:44:56Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Mesham arjuna logo
18147eae74106487894c9dcbd40dd8088e84cfd0
File:Runtimelibrary.png
6
212
1154
1153
2019-04-15T15:44:56Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Runtime library icon
4cdf1b63469639f8e3882a9cb001ce3c1443d3fa
File:Spec.png
6
213
1156
1155
2019-04-15T15:44:56Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Language specification
a6c03d5a30547b6c09595ea22f0dbebbeef99f62
Tutorial - Hello world
0
214
1173
1172
2019-04-15T15:44:57Z
Polas
1
16 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham first tutorial providing an introduction to the language</metadesc>
'''Tutorial number one''' - [[Tutorial_-_Simple_Types|next]]
== Introduction ==
In this tutorial we will have a look at writing, compiling and running our first Mesham parallel code. You will see an introduction to how we structure a program and use the standard functions, and we will discuss different forms of parallel structure. This tutorial assumes that you have the Mesham compiler and runtime library installed and working on your machine as per the instructions [[Download_1.0|here]].
== Hello world ==
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
=== Compilation and execution ===
Copy and paste this code into a text file and name it ''test.mesh'' - of course it can be called anything, but we will assume this name in the tutorial. Compile by issuing the command ''mcc test.mesh'', which will report any errors (there should be none with this example) and produce an executable, in this case named ''test''.
In order to run the code you will need to issue the command ''mpiexec -np 1 ./test'' - this invokes the MPI process manager with one process and runs the executable. Mesham is designed such that, if run with one process only, it will spawn any other processes it needs. However, the code can only be run with either the correct number of processes or one - any other number is assumed to be a mistake and will result in an error message.
On running the code you should see the following output, although the order of the lines may differ:
Hello world from pid=0 with p=0
Hello world from pid=2 with p=2
Hello world from pid=1 with p=1
Hello world from pid=3 with p=3
=== A look under the bonnet ===
Let's take a further look at the code and see exactly what it is doing. Lines 1 to 3 include the standard function headers - we are using function calls from all three of these sub libraries in the program (''print'' from ''io'', ''pid'' from ''parallel'' and ''itostring'' from ''string''.) Wrapping a name in the < > braces tells the preprocessor to first look for system includes (as these are.)
Line 5 declares the main function, which is the program entry point; all compiled codes that you wish to execute require this function. Only a limited number of items, such as type and program variable declarations, may appear outside of a function body. At line 6 we declare the variable ''p'', but at this point we have opted to provide no further information (such as the type) because this can be deduced on the next line. At line 7 we use the [[Par|par]] keyword to declare a parallel loop (the parallel equivalent of a [[For|for]] loop), which is basically saying ''execute this loop from 0 to 3 (4 iterations) in parallel, running each iteration within its own process.''
Line 8 is executed by four independent processes, each calling the [[Print|print]] function to display a message to standard out. The return value of the [[Pid|pid]] function, which provides us with the current process's absolute id, and the variable ''p'' are both [[Int]] (the latter deduced because ''p'' is used in the [[Par|par]] statement.) It is only possible to print out [[String|Strings]], so the [[Itostring|itostring]] function is called to convert from an integer to a string value.
At this point it is worth noting two aspects of this code. The first (and very important) one is that all blocks are delimited by sequential composition (;). This is because, in a parallel language, it is important to make explicit whether blocks are executed one after another (sequentially) or at the same time (in parallel.) Secondly, see how we have displayed both the process id (via the [[Pid|pid]] function call) and the value of variable ''p''. Whilst in this simple example they will probably be equal, there is no guarantee of this - the language will allocate the iterations of a [[Par|par]] loop to the processes it sees fit.
== Making things more interesting ==
We are now going to make things a little more interesting and build upon what we have just seen. You have just read that the [[Par|par]] loop assigns iterations to the processes it feels are most appropriate - we are now going to have a look at this in more detail.
#include <io>
#include <parallel>
#include <string>
function void main() {
var p;
skip ||
par p from 0 to 3 {
print("Hello world from pid="+itostring(pid())+" with p="+itostring(p)+"\n");
};
};
Now compile and execute this code in the same manner as described above; you should see some output similar to (but possibly with a different ordering):
Hello world from pid=1 with p=0
Hello world from pid=2 with p=1
Hello world from pid=4 with p=3
Hello world from pid=3 with p=2
So what's going on here? Well, the output is telling us that the first iteration of the [[Par|par]] loop is running on process 1, the second on process 2, and so on. The reason for this is the use of parallel composition (||) on line 7. At this line we are in effect saying ''do nothing via the skip command and at the same time run the par loop.'' In fact a [[Par|par]] loop is a syntactic shortcut for lots of parallel compositions (in this case we could replace the par loop with four parallel compositions, although the code would look really messy!)
== Absolute process selection ==
We have already said that the [[Par|par]] loop does not make any guarantee as to what iteration is placed upon what process. However, sometimes it is useful to know exactly what is running where. To this end we have two constructs the [[Proc|proc]] and [[Group|group]] statements.
=== Single process selection ===
To select a single process absolutely by its ID number you can use the [[Proc|proc]] statement. The following code illustrates this:
#include <io>
function void main() {
proc 0 {
print("Hello from process 0\n");
};
proc 1 {
print("Hello from process 1\n");
};
};
Which, if you compile and execute, will display two lines of text - the first saying hello from process 0 and the other saying hello from process 1 - although which comes first depends on the speed of the processes and will often vary even between runs!
=== Group process selection ===
Whilst the [[Proc|proc]] statement sounds jolly useful (and it is!), you can imagine that if you want to select multiple processes to do the same thing by their absolute process IDs then many duplicate proc statements in your code will be quite horrid (and wear out your keyboard!) Instead we supply the [[Group|group]] statement, which allows the programmer to select multiple processes to execute the same block. Building upon the previous example code:
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,1,2,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
If you compile and execute this you will get something like:
Hello world from pid=0
Hello world from pid=1
Hello world from pid=2
Hello world from pid=3
See the difference from above? Even though we have the parallel composition here, the [[Group|group]] statement selects processes by their absolute process ID, so you can be sure that processes 0, 1, 2 and 3 are executing that block. In fact, process 0 will first run the skip statement and then the group block in this example. One last thing - notice how we had to remove all references to variable ''p'' here? Because we are no longer using the [[Par|par]] loop, we can not leave the declaration of this variable in the code, as the language has no way to deduce what the type of ''p'' will be and would produce an error during compilation (try it!)
But isn't it a bit annoying having to type each individual process id into a group statement? That is why we support the texas range (...) in a group, to mean the entire range from one numeric to another.
#include <io>
#include <parallel>
#include <string>
function void main() {
skip ||
group 0,...,3 {
print("Hello world from pid="+itostring(pid())+"\n");
};
};
The above code is pretty much the same as the one before (and should produce the same output) - but see how we have saved ourselves some typing by using the texas range in the group process list. This is especially useful when we are specifying very large ranges of processes, but it has a number of limits. Firstly, the texas range must sit between two process ids (it can not appear first or last in the list) and secondly the range must go upwards, so the id on the left must be smaller than the id on the right.
== Summary ==
Whilst the code we have been looking at here is very simple, in this tutorial we have covered the four basic parallel constructs which we can use to structure our code and discussed the differences between them. We have also looked at writing a simple Mesham code using the main function, and at using standard functions by including the appropriate sub libraries.
[[Category:Tutorials|Hello world]]
4ddce3d7f2af8f0bb70ba5a0b468a8caa6c54b01
Tutorial:gettingStarted
0
215
1175
1174
2019-04-15T15:44:57Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[Tutorial - Hello world]]
85fbc14874a26f0ed9ff198aa41fd7d659324dc2
NPB
0
216
1177
1176
2019-04-15T15:44:57Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[NAS-IS Benchmark]]
b13d3afac8c6047488d01f48483a9ea039fc6b11
Category:Example Codes
14
217
1180
1179
2019-04-15T15:44:57Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Category:Tutorials
14
218
1183
1182
2019-04-15T15:44:57Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Tutorial - Simple Types
0
219
1199
1198
2019-04-15T15:44:58Z
Polas
1
15 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham tutorial detailing an overview of how type oriented programming is used in the language</metadesc>
'''Tutorial number two''' - [[Tutorial_-_Hello world|prev]] :: [[Tutorial_-_Functions|next]]
== Introduction ==
In this tutorial we will be looking at a simple use of types in Mesham and how we can change what our code is doing just by modifying the type. It is assumed that the reader has already worked through the [[Tutorial - Hello world|Hello world]] tutorial and is familiar with the concepts discussed there.
== A question of types ==
#include <io>
#include <string>
function void main() {
var a:=78;
print(itostring(a)+"\n");
};
In the above code snippet we have included the appropriate system headers (for printing and integer to string conversion), specified our program entry point via the main function and declared variable ''a'' to contain the value ''78''. Whilst this looks very simple (and it is), there are some important type concepts lurking behind the scenes. There are three ways of declaring a variable: via explicit typing; by specifying a value, as is the case here, in which case the type will be deduced via inference; or by specifying neither and postponing the typing until later on (as in the [[Tutorial - Hello world|Hello world]] tutorial with variable ''p'', which was inferred to be an [[Int]] later on as it was used in a [[Par|par]] statement.)
In the code above, via type inference, variable ''a'' is deduced to be of type [[Int]] and, in the absence of further types, there are a number of other default types associated with an integer: the [[Stack|stack]] type to specify that it is allocated on the stack frame of the current function, the [[Onesided|onesided]] type which determines that it uses one sided (variable sharing) communication, the [[Allocated|allocated]] type which specifies that memory is allocated, and lastly the [[Multiple|multiple]] type which specifies that the variable is allocated to all processes. So, by specifying a value, the language has deduced all this behaviour via inference, and it can be overridden by explicitly using types. Note that these defaults are not just for [[Int|Ints]]; they apply to all [[:Category:Element Types|element types]].
== Type chains ==
In the previous section we saw that, by default, element types such as [[Int|Ints]] have a default set of type behaviour associated with them. These types are combined together to form a chain. The type chain resulting from the use of an [[Int]] and these defaults is: [[Int]]::[[Onesided|onesided]]::[[Stack|stack]]::[[Allocated|allocated]][ [[Multiple|multiple]][] ]. There are a number of points to note about this chain. Firstly, the ''::'' operator (the type chaining operator) chains these independent types together, and precedence is from right to left - so the behaviour of the types on the right overrides the behaviour of those to their left if there is any conflict. For example, if we were to append another form of memory allocation, the [[Heap|heap]] type which allocates memory on the heap, to the rightmost end of the chain, then this would override the behaviour of the [[Stack|stack]] type to the left of it.
#include <io>
#include <string>
function void main() {
var a:Int::stack::onesided::allocated[multiple[]];
a:=78;
print(itostring(a)+"\n");
};
The above code is, in terms of runtime behaviour, absolutely identical to the first code example that we saw - we have just explicitly specified the type of variable ''a'' to be the type chain that is inferred in the first example. As you can see, being able to write code without all these explicit types saves typing in many cases. It is also important to note that we can associate optional information with these types. For instance, we have provided the [[Multiple|multiple]] type as a parameter to the [[Allocated|allocated]] type. Parameters can be anything (further type chains, values or variables known at compile time) and, in the absence of further information, it is entirely optional whether to provide empty ''[]'' braces or not.
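As a sketch of the overriding rule described above, appending the [[Heap|heap]] type to the rightmost end of the default chain (this exact declaration is illustrative) would cause ''a'' to be allocated on the heap rather than the stack:
function void main() {
var a:Int::stack::onesided::allocated[multiple[]]::heap;
a:=78;
};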
All type chains must have at least one [[:Category:Element Types|element type]] contained within it. Convention has dictated that all [[:Category:Element Types|element types]] start with a capitalised first letter (such as [[Int]], [[Char]] and [[Bool]]) whereas all other types known as [[:Category:Compound Types|compound types]] start with a lower case first letter (such as [[Stack|stack]], [[Multiple|multiple]] and [[Allocated|allocated]].)
=== Let's go parallel ===
So the code we have seen up until this point isn't very exciting when it comes to parallelism. In the following code example we are involving two processes with shared memory communication:
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
proc 1 {
a:=78;
};
sync;
proc 0 {
print("Value: "+itostring(a)+"\n");
};
};
The important change here is that we have replaced the [[Multiple|multiple]] type with the [[Single|single]] type, with the [[On|on]] type provided as a parameter and the value ''0'' provided to that type. What this is doing is allocating variable ''a'' to the memory of process 0 only. Note how we have also omitted the [[Stack|stack]] and [[Onesided|onesided]] types - they are still applied by default, as we have not specified types to control memory or the communication method - but omitting them makes the code more readable.
In the first [[Proc|proc]] block, process 1 writes the value ''78'' to variable ''a''. Because this variable is held on process 0 only and is not local to process 1, this will involve some form of shared memory communication to get that value across (as defined by the [[Onesided|onesided]] communication type, which is used by default.) Process 0, in the second [[Proc|proc]] block, will read out the value of variable ''a'' and display it on standard output. A very important aspect of this code is found on line 9: the [[Sync|sync]] keyword. The default shared memory communication is not guaranteed to complete until the appropriate synchronisation has occurred. This acts both as a barrier and as the point at which all processes that need to will write their values of ''a'' to the target remote memory. Synchronisation is Concurrent Read Concurrent Write (CRCW), which means that between synchronisations multiple processes are allowed to read and write to the same locations any number of times, although with writing there is no guarantee which value will be used if they differ in the same step. Additionally, a variable name may be specified after the [[Sync|sync]] keyword, which means synchronise on that variable alone - if you omit it, as we have here, then it will synchronise on all outstanding variables and their communications.
''Exercise:'' Comment out the synchronisation line and run the code again - see how process 0 now reports the value as zero? This is because synchronisation has not occurred and the value has not been written (by default an [[Int]] is initialised to the value zero.)
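As a sketch, synchronising on variable ''a'' alone would look as follows (replacing the middle of the example above; illustrative only):
proc 1 {
a:=78;
};
sync a;
proc 0 {
print("Value: "+itostring(a)+"\n");
};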
=== Further parallelism ===
We have very slightly modified the code below:
#include <io>
#include <string>
var master:=1;
var slave:=0;
function void main() {
var a:Int::allocated[single[on[master]]];
proc slave {
a:=78;
};
sync;
proc master {
print("Value: "+itostring(a)+"\n");
};
};
You can see that here we have added two variables, ''master'' and ''slave'', which control where the variable is allocated and who does the value writing. Try modifying these values, although be warned that changing them to large values will cause the creation of many processes that do nothing, as the [[Proc|proc]] construct will create the preceding processes to honour the process ID; for instance if you specify ''master'' to be 90, then processes 0 to 90 will be created to ensure that the process with ID 90 executes that specific block. The limitation here is that the value of these variables must be known at compile time, so it is fine to specify them in the code like this, but they could not, for example, be the result of some user input or a command line argument. Also note how we have declared these variables with global program scope by declaring them outside of the function. Of course we could just as easily have placed them inside the main function, but this illustrates that declaring variables is allowed in global scope outside of a function body.
== Changing the type ==
As the Mesham code runs we can change the type of a variable by modifying the chain, this is illustrated in the following code:
function void main() {
var a:Int;
a:=23;
a:a::const;
a:=3;
};
Try to compile this - see an error at line 5? Don't worry, that was entirely expected. We type variable ''a'' to be an [[Int]] (with all the default types that go with it) and perform an assignment at line 3, which goes ahead fine; but then at line 4 we modify the type of ''a'', via the set type operator '':'', to be the current type of ''a'' chained with the [[Const|const]] type, which forces the variable to be read only. Hence the assignment at line 5 fails, because the type of variable ''a'' now has the [[Const|const]] type in the chain. By removing this assignment, or the type modification at line 4, the code will compile fine.
Modifying types in this form can be very powerful, but there are some points to bear in mind. Firstly, it is not possible to modify the [[Allocated|allocated]] type or its contents - we are changing the behaviour of a variable, but not if and where it is allocated in memory - and attempting to do so will result in an error. Secondly, modifying a type binds the modification to the local scope; once we leave this scope the type reverts back to what it was before.
function void main() {
var a:Int;
a:=23;
a::const:=3;
};
It is also possible to modify the type chain of a variable just for a specific assignment or expression. The code above will also fail to compile, because the programmer has specified that, just for the assignment at line 4, the [[Const|const]] type is appended to the end of the type chain of variable ''a''. If you remove this type modification then the code is perfectly legal and will compile and execute fine.
[[Category:Tutorials|Simple Types]]
c640afcc6976434738239cdc292c0a7cbb1dee5b
Tutorial - Functions
0
220
1206
1205
2019-04-15T15:44:58Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of functions and functional abstraction in Mesham</metadesc>
'''Tutorial number three''' - [[Tutorial_-_Simple Types|prev]] :: [[Tutorial_-_Parallel Constructs|next]]
== Introduction ==
In this tutorial we will be looking at the use of functions in Mesham, both writing our own functions and calling others. Functional abstraction is a very useful aspect to many languages and allows for one to make their code more manageable. We shall also take a look at how to provide optional command line arguments to some Mesham code.
== My first function ==
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
var c:=myAddFunction(a,20);
print(itostring(c)+"\n");
};
The above code declares two functions: ''myAddFunction'', which takes in two [[Int|Ints]] and returns an [[Int]] (the addition of these two numbers), and a ''main'' function which is the program entry point. In our ''main'' function you can see that we are calling out to ''myAddFunction'' using a mixture of the ''a'' variable and the constant value ''20''. The result of this function is then assigned to variable ''c'', which is displayed to standard output.
There are a number of points to note about this. First, notice that each function body is terminated via the sequential composition (;) token. This is because all blocks in Mesham must be terminated with some composition and functions are no exception, although it is currently meaningless to terminate them with parallel composition. Secondly, move ''myAddFunction'' so that it appears below the ''main'' function and recompile - see that it still works? This is because functions in Mesham can be declared in any order, and it is up to the programmer to decide what order makes their code most readable. As an exercise, notice that we don't really need variable ''c'' at all - remove it and, in the [[Print|print]] function call, replace the reference to ''c'' with a call to our own function itself.
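One possible answer to that exercise looks like the following sketch (it uses only the functions already shown above):
#include <io>
#include <string>
function Int myAddFunction(var a:Int, var b:Int) {
return a+b;
};
function void main() {
var a:=10;
// variable ''c'' is gone - the function call is used directly in print
print(itostring(myAddFunction(a,20))+"\n");
};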
== Function arguments ==
By default all [[:Category:Element Types|element types]] and [[Record|records]] are pass by value, whereas [[Array|arrays]] and [[Referencerecord|reference records]] are pass by reference. This is dependent on the manner in which these data types are allocated, the former using the [[Stack|stack]] type whereas the latter use the [[Heap|heap]] type. We can determine whether a function's arguments and return value are pass by value or reference by specifying the [[Stack|stack]] (value), [[Static|static]] (value) or [[Heap|heap]] (reference) type in the chain.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int) {
mydata:=76;
};
If you compile and execute the above code then you will see the output ''10''. This is because, by default, an [[Int]] is pass by value: the value of ''a'' is passed into ''myChangeFunction'', which sets ''mydata'' to be equal to it. When we modify ''mydata'', because it occupies entirely different memory from ''a'', it has no effect upon ''a''.
#include <io>
#include <string>
function void main() {
var a:=10;
myChangeFunction(a);
print(itostring(a)+"\n");
};
function void myChangeFunction(var mydata:Int::heap) {
mydata:=76;
};
This code snippet is very similar to the previous one, but we have added the [[Heap|heap]] type to the chain of ''mydata'' - if you compile and execute this you will now see the output ''76''. This is because, by using the [[Heap|heap]] type, we have changed to pass by reference, which means that ''mydata'' and ''a'' share the same memory and hence a change to one will modify the other. As far as function arguments go, it is fine to have a variable's memory allocated by some means and to pass it to a function which expects memory in a different form - such as above, where ''a'' is (by default) allocated to stack memory but ''mydata'' is on heap memory. In such cases Mesham handles the necessary transformations.
=== The return type ===
function Int::heap myNewFunction() {
var a:Int::heap;
a:=23;
return a;
};
The code snippet above will return an [[Int]] by reference when the function is called; internally the function creates variable ''a'', allocates it to [[Heap|heap]] memory, sets its value and returns it. However, an important distinction between function arguments and function return types is that the memory allocation of what we are returning must match the type. For example, change the type chain in the declaration from ''Int::heap'' to ''Int::stack'' and recompile - see that there is an error? When we think about this logically it is the only way in which this can work - if we allocate to the [[Stack|stack]] then the memory is on the current function's stack frame, which is destroyed once that function returns; if we were to return a reference to an item on this then that item would no longer exist and bad things would happen! By ensuring that the memory allocations match, we have allocated ''a'' to the heap, which exists outside of the function calls and will be garbage collected when appropriate.
== Leaving a function ==
Regardless of whether we are returning data from a function or not, we can use the [[Return|return]] statement on its own to force leaving that function.
function void myTestFunction(var b:Int) {
if (b==2) return;
};
In the above code if variable ''b'' has a value of ''2'' then we will leave the function early. Note that we have not followed the conditional by an explicit block - this is allowed (as in many languages) for a single statement.
As an exercise, add some value after the return statement so that, for example, it reads something like ''return 23;'' - now attempt to recompile and see that you get an error, because in this case we are attempting to return a value when the function's definition declares that it returns nothing.
== Command line arguments ==
The main function also supports the reading of command line arguments. By definition you can provide the main function with either no function arguments (as we have seen up until this point) or alternatively two arguments, the first an [[Int]] and the second an [[Array|array]] of [[String|Strings]].
#include <io>
#include <string>
function void main(var argc:Int, var argv:array[String]) {
var i;
for i from 0 to argc - 1 {
print(itostring(i)+": "+argv[i]+"\n");
};
};
Compile and run the above code. With no arguments you will just see the name of the program; if you now supply command line arguments (separated by a space) then these will also be displayed. There are a couple of general points to note about the code above. Firstly, the variable names ''argc'' and ''argv'' for the command line arguments are the generally accepted names to use - although you can call these variables whatever you want if you are so inclined.
Secondly, notice how we only tell the [[Array|array]] type that it is a collection of [[String|Strings]] and give no information about its dimensions. This is allowed in a function argument's type, as we don't always know the size, but it will limit us to one dimension and stop any error checking from happening on the index bounds used to access elements. Lastly, see how we are looping from 0 to ''argc - 1''; the [[For|for]] loop is inclusive of its bounds, so if ''argc'' were zero then one iteration would still occur, which is not what we want here.
[[Category:Tutorials|Functions]]
071e1f87c2a958d2d18c42172fb1ea1328053716
Tutorial - Parallel Constructs
0
221
1212
1211
2019-04-15T15:44:58Z
Polas
1
5 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing how to structure parallel code in Mesham</metadesc>
'''Tutorial number four''' - [[Tutorial_-_Functions|prev]] :: [[Tutorial_-_Shared Memory|next]]
== Introduction ==
In this tutorial we shall look at more advanced parallel constructs than those discussed in the [[Tutorial - Hello world|Hello world]] tutorial. There will also be some reference made to the concepts noted in the [[Tutorial - Functions|functions]] and [[Tutorial - Simple Types|simple types]] tutorials too.
== Parallel composition ==
In the [[Tutorial - Hello world|Hello world]] tutorial we briefly saw an example of using parallel composition (||) to control parallelism. Let's now further explore this with some code examples:
#include <io>
#include <string>
#include <parallel>
function void main() {
{
var i:=pid();
print("Hello from PID "+itostring(i)+"\n");
} || {
var i:=30;
var f:=20;
print("Addition result is "+itostring(i+f)+"\n");
};
};
This specifies two blocks of code, both running in parallel (two processes). The first will display a message with the process ID in it; the other process will declare two [[Int]] variables and display the result of adding these together. This approach, of specifying code in blocks and then using parallel composition to run the blocks in parallel on different processes, is a useful one. As a further exercise try rearranging the blocks and view the value of the process ID reported; also add further parallel blocks (via more parallel composition) to do things and look at the results.
=== Unstructured parallel composition ===
In the previous example we structured parallel composition by using blocks; it is also possible to run individual statements in parallel using this composition, although it is important to understand the associativity and precedence of parallel composition and sequential composition when doing so.
#include <io>
#include <string>
#include <parallel>
function void main() {
var i:=0;
var j:=0;
var z:=0;
var m:=0;
var n:=0;
var t:=0;
{i:=1;j:=1||z:=1;m:=1||n:=1||t:=1;};
print(itostring(pid())+":: i: "+itostring(i)+", j: "+itostring(j)+", z: "+itostring(z)
+", m: "+itostring(m)+", n: "+itostring(n)+", t: "+itostring(t)+"\n");
};
This is a nice little program to help figure out what, for each process, is being run. You can further play with this code and tweak it as required. Broadly, we are declaring all the variables to be [[Int|Ints]] of zero value and then executing the code in the { } code block, followed by the [[Print|print]] statement, on all processes. Where it gets interesting is when we look at the behaviour inside the code block itself. The assignment ''i:=1'' is executed on all processes, sequentially composed with the rest of the code block; ''j:=1'' is executed just on process 0, whereas at the same time the value of 1 is written to variables ''z'' and ''m'' on process 1. Process 2 performs the assignment ''n:=1'' and lastly process 3 assigns 1 to variable ''t''. From this example you can understand how parallel composition behaves when unstructured like this - as an exercise add additional code blocks (via braces) and see how that changes the behaviour by specifying explicitly what code belongs where.
The first parallel composition will bind to the statement (or code block) immediately before it and then those after it - hence ''i:=1'' is performed on all processes but the sequentially composed statements after the parallel composition are performed on just one process. Incidentally, if we removed the { } braces around the unstructured parallel block, then the [[Print|print]] statement would just be performed on process 3 - if it is not clear why then experiment and reread this section to fully understand.
== Allocation inference ==
If we declare a variable to have a specific allocation strategy within a parallel construct then this must be compatible with the scope of that construct. For example:
function void main() {
group 1,3 {
var i:Int::allocated[multiple[]];
};
};
If you compile the above code then it will work, but you get the warning ''Commgroup type and process list inferred from multiple and parallel scope''. So what does this mean? Well, we are selecting a [[Group|group]] of processes (in this case processes 1 and 3) and declaring variable ''i'' to be an [[Int]] allocated to all processes; however, the processes not in scope (0 and 2) will never know of the existence of ''i'' and hence can never be involved with it in any way. Even worse, if we were to synchronise on ''i'' then it might cause deadlock on these other processes that have no knowledge of it. Therefore, allocating ''i'' to all processes is the wrong thing to do here. Instead, what we really want is to allocate ''i'' to the group of processes that are in parallel scope using the [[Commgroup|commgroup]] type; if this is omitted the compiler is clever enough to deduce it, put that behaviour in and warn the programmer that it has done so.
If you modify the type chain of ''i'' from ''Int::allocated[multiple[]]'' to ''Int::allocated[multiple[commgroup[]]]'' and recompile, you will see a different warning saying that it has just inferred the process list from parallel scope (and not the type, as that is already there). Now change the type chain to read ''Int::allocated[multiple[commgroup[1,3]]]'' and recompile - see that there is no warning, as we have explicitly specified the processes to allocate the variable to? It is up to you as a programmer, and your style, to decide whether you want to do this explicitly or put up with the compiler warnings.
So, what happens if we try to allocate variable ''i'' to some process that is not in parallel scope? Modify the type chain of ''i'' to read ''Int::allocated[multiple[commgroup[1,2]]]'' and recompile - you should see an error now that looks like ''Process 2 in the commgroup is not in parallel scope''. We have the same protection for the single type too:
function void main() {
group 1,3 {
var i:Int::allocated[single[on[0]]];
};
};
If you try to compile this code then you will get the error ''Process 0 in the single allocation is not in parallel scope'', which is because you have attempted to allocate variable ''i'' to process 0, but this process is not in scope so the allocation can never be done. Whilst we have been experimenting with the [[Group|group]] parallel construct, the same behaviour is true of all parallel structural constructs.
== Nesting parallelism ==
Nesting parallelism is currently disallowed; whilst it could provide more flexibility for the programmer, it makes for a more complex language from the designer and compiler writer point of view.
function void main() {
var p;
par p from 0 to 3 {
proc 0 {
skip;
};
};
};
If you compile the above code then it will result in the error ''Can not currently nest par, proc or group parallel blocks''.
== Parallelism in other functions ==
Up until this point we have placed our parallel constructs within the ''main'' function, but there is no specific reason for this.
#include <io>
function void main() {
a();
};
function void a() {
group 1,3 {
print("Hello from 1 or 3\n");
};
};
If you compile and run the above code then you will see that processes 1 and 3 display the message to standard output. As an exercise, modify this code to include further functions which have their own parallel constructs in them, and call them from ''main'' or your own functions.
An important point to bear in mind with this is that ''a'' is now a parallel function and there are some points to consider. Firstly, all parallel constructs ([[Par|par]], [[Proc|proc]] and [[Group|group]]) are blocking calls - hence all processes must see these, so to avoid deadlock all processes must call the function ''a''. Secondly, as discussed in the previous section, remember how we disallow nested parallelism? Well, we relax this restriction here '''but''' it is still not safe:
#include <io>
function void main() {
var p;
par p from 0 to 3 {
a();
};
};
function void a() {
group 1,3 {
print("Hello from 1 or 3\n");
};
};
If you compile the above code then it will work, but you will get the warning ''It might not be wise calling a parallel function from within a parallel block''. Running the executable will result in the correct output, but changing the ''3'' to a ''2'' in the [[Par|par]] loop will result in deadlock. Therefore it is best to avoid this technique in practice.
[[Category:Tutorials|Parallel Constructs]]
0bb1bd17c7e11c7496a29db6d4112a6b4d7328e7
Tutorial - Shared Memory
0
222
1224
1223
2019-04-15T15:44:59Z
Polas
1
11 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing basic, shared remote memory, communication in Mesham</metadesc>
'''Tutorial number five''' - [[Tutorial_-_Parallel Constructs|prev]] :: [[Tutorial_-_Parallel Types|next]]
== Introduction ==
In this tutorial we will be looking at using the default shared memory model for simple communication involving a single variable. It is important to understand the memory model behind this form of communication and when variables will be subject to communication.
== Shared memory model ==
Mesham follows the Logic Of Global Synchrony (LOGS) model of shared memory. This actually sounds much more formidable than it is in reality and follows a small number of practical rules. Each variable can be thought of as starting in one state, finishing in another (if and when the code terminates) and, throughout the program's life, being in a number of intermediate states. We go from one intermediate state to the next when [[Sync|synchronisation]] is used, and this can be thought of as barrier synchronisation.
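These rules can be sketched in code; the comments describe the intended LOGS behaviour and the variable name is illustrative only:
function void main() {
var a:Int::allocated[single[on[0]]];
a:=1; // an intermediate state: the write is issued but not guaranteed complete
sync; // barrier synchronisation moves every process to the next state
// after the sync, all processes observe the completed writes to ''a''
};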
== My first communication ==
Communication depends on exactly where variables are allocated, which in itself is driven by types.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[]];
a:=1;
proc 1 {
a:=99;
};
sync a;
proc 0 {
print(itostring(a)+"\n");
};
};
If you compile and run the above code then you will see the output ''1'' - so let's have a look at what exactly is going on here. Variable ''a'' is allocated to all processes and all processes set its value to be ''1''; process one will then change the value to be ''99'', we do a barrier synchronisation on ''a'' and then process zero will display its value. Because ''a'' is allocated to all processes (via the [[Multiple|multiple]] type), assignment and access is always local - i.e. in this case, process one modifying the value will have no impact on the ''a'' held on other processes such as process zero.
So we have seen that variables allocated to all processes always involve '''local''' access and assignment; let's do something a bit more interesting - change the ''multiple[]'' to be ''single[on[0]]'' and recompile and run the code. Now the output is different and it displays ''99''. That is because if a variable is allocated just to a specific process and another one reads/writes to it, then this will involve remote access to that memory (communication). Let's experiment further with this: remove ''a'' from the [[Sync|sync]] statement (line 10) and recompile and rerun; the result should be the same, ''99'' displayed. If we specify a variable with the [[Sync|sync]] keyword then this will barrier synchronise just on that variable, whereas [[Sync|sync]] by itself will barrier synchronise on '''all''' variables which require it. Ok then, now comment out the [[Sync|sync]] statement entirely and recompile and run the code - see it now displays ''1'' again? This is because we can only guarantee that a value has been written into some remote memory after barrier synchronisation has occurred.
We have seen that if a variable is allocated to all processes then read/write will always be a local operation but if a variable is allocated just to a single process then read/write will be a remote operation on every other process.
=== Further communication ===
#include <io>
#include <string>
function void main() {
var a:Int::allocated[single[on[0]]];
var b:Int::allocated[multiple[]];
proc 0 {
a:=1;
};
sync ;
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
The code snippet above is similar to the first one but with some important differences. We are declaring two variables; the first, ''a'', is held on process zero only, whereas the second, ''b'', is allocated to all processes. Process zero alone (via the [[Proc|proc]] statement) will then modify ''a'' locally (as it is held there). We then [[Sync|synchronise]] all processes to ensure process zero has updated ''a''; process one will then obtain the value of ''a'' from process zero and pop this into its own ''b'', after which it completes all operations involving variable ''a'' and displays its value of ''b''. Stepping back a moment, what we are basically doing here is getting some remote data and copying it into a local variable; the result is that the value held by process zero in ''a'' will be retrieved into ''b'' on process one. If you remove the [[Sync|sync]] statement on line 10 then you might see that instead of the value ''1'', ''0'' is displayed (the default [[Int]] initialisation value). This is because synchronisation must occur to ensure process zero has updated ''a'' before process one reads from it; equally, the last synchronisation statement completes the RMA, and if you remove this then it is likely that the value in ''b'' will not have been updated.
#include <io>
#include <string>
function void main() {
var a:Int::allocated[multiple[commgroup[0,2]]];
var b:Int::allocated[single[on[1]]];
group 0, 2 {
a:=2;
b:=a;
};
sync;
proc 1 {
print(itostring(b)+"\n");
};
};
The above illustrates a [[Commgroup|communication group]]; as this has to be provided within the [[Multiple|multiple]] type, the variable ''a'' is private to each process that it is allocated on. Here processes zero and two update their own (local) version of ''a'' and then remotely write to variable ''b'' held on process one; both processes will send values over but, as these are the same, there is no conflict. [[Sync|Synchronisation]] is used to complete the RMA and ensure process one awaits updates to its ''b'', which it then displays.
== Single to single ==
If we have two variables which are allocated to single processes then any assignment involving these will either result in local or remote access depending on whether they are on the same process or not.
#include <io>
#include <string>
var processOneAllocation:=0;
var processTwoAllocation:=0;
function void main() {
var a:Int::allocated[single[on[processOneAllocation]]];
var b:Int::allocated[single[on[processTwoAllocation]]];
proc processTwoAllocation {
b:=23;
a:=b;
};
//sync;
group processOneAllocation {
print(itostring(a)+"\n");
};
};
In the example above we are allocating variables ''a'' and ''b'' both on process zero; we are then performing the assignment ''a:=b'' at line 12 which, because the variables are on the same process, is local and occurs immediately. Now, change ''processOneAllocation'' to be equal to ''1'', uncomment the [[Sync|sync]] keyword at line 14 and recompile and run. See the same value - but now process 0 is writing the value of ''b'' into the remote memory of ''a'', and if you comment out the [[Sync|sync]] keyword then a value of ''0'' will be reported. The values of ''processOneAllocation'' and ''processTwoAllocation'' can be anything - if they are the same then the access is local and if not then it is remote.
== Limits of communication ==
Currently all variables declared multiple (including those with communication groups) should be considered private; it is only variables declared single which can be accessed by another process.
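A short sketch of this rule (the variables here are hypothetical; the comments state what is and is not permitted):
function void main() {
var a:Int::allocated[multiple[]]; // one private copy per process - not remotely accessible
var b:Int::allocated[single[on[0]]]; // held on process zero - other processes access it remotely
proc 1 {
b:=12; // fine: a remote write into process zero's memory
// there is no way here to reach the copy of ''a'' held by another process
};
sync;
};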
[[Category:Tutorials|Shared Memory]]
f6e81b749670b86f85f99bb159b073f3df2d7db7
Tutorial - Arrays
0
223
1236
1235
2019-04-15T15:44:59Z
Polas
1
11 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing collecting data together via arrays in Mesham</metadesc>
'''Tutorial number seven''' - [[Tutorial_-_Parallel Types|prev]] :: [[Tutorial_-_RMA|next]]
== Introduction ==
An [[Array|array]] is a collection of element data in one or more dimensions and is a key data structure used in numerous codes. In this tutorial we shall have a look at how to create, use and communicate arrays.
== Simple arrays ==
function void main() {
var a:array[Int,10];
};
The above code will declare variable ''a'' to be an [[Array|array]] of ten [[Int|Ints]], indexed 0 to 9 inclusive. In the absence of further information a set of default types will be applied, which are: [[Heap|heap]], [[Onesided|onesided]], [[Row|row]], [[Allocated|allocated]], [[Multiple|multiple]]. Arrays, when allocated to the heap, are subject to garbage collection, which will remove them when no longer used.
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
var i;
for i from 0 to 9 {
a[i]:=i;
};
for i from 0 to 9 {
print(itostring(a[i]));
};
};
The code snippet demonstrates writing to and reading from elements of an array; if you compile and run it then you will see it displays the values ''0'' to ''9'' on standard output. We can access an element of an array (for reading or writing) via the ''[x]'' syntax, where ''x'' is either an [[Int]] constant or variable.
=== Arrays and functions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,10];
fill(a);
display(a);
};
function void fill(var a:array[Int,10]) {
var i;
for i from 0 to 9 {
a[i]:=i;
};
};
function void display(var a:array[Int]) {
var i;
for i from 0 to 9 {
print(itostring(a[i]));
};
};
This code demonstrates passing arrays into functions and there are a couple of noteworthy points to make here. First, because an [[Array|array]] is, by default, allocated to the [[Heap|heap]], as discussed in the [[Tutorial - Functions|functions tutorial]], it is passed by reference. Hence modifications made in the ''fill'' function do affect the original data allocated in the ''main'' function, which is what we want here. Secondly, see that the type we provide to the ''display'' function does not have any explicit size associated with the array? It is not always possible to know the size of an array that is being passed into a function, so Mesham allows the type of a function argument to be specified without a size, but with two restrictions: first it must be a one dimensional array and secondly no compile time bounds checking can take place.
=== Multi dimensional arrays ===
Arrays can be any number of dimensions just by adding extra bounds into the type declaration:
function void main() {
var a:array[Int,16,8];
a[0][1]:=23;
};
This code illustrates declaring variable ''a'' to be an [[Array|array]] of two dimensions; the first of size 16 and the second 8. By default all allocation of arrays is [[Row|row major]] although this can be overridden. Line three illustrates writing into an element of a two dimensional array.
== Communication of arrays ==
Arrays can be communicated entirely, per dimension or by individual elements.
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
a[0][1]:=28;
};
sync;
proc 1 {
print(itostring(a[0][1])+"\n");
};
};
In this example process 0 writes to the (remote) memory of process 1 which contains the array, synchronisation occurs and then the value is displayed by process 1 to standard output.
=== Communicating multiple dimensions ===
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 7 {
a[2][i]:=i;
};
};
sync;
proc 1 {
var i;
for i from 0 to 7 {
print(itostring(a[2][i])+"\n");
};
};
};
Compile and run this code and look at the output - is it just a list of the value ''8'', not what you expected? In this example the values copied across may be any number between 0 and 8, because at each assignment ''a[2][i]:=i;'' we are setting the remote value of ''a'' at this specific index to be the value held in ''i''. However, this communication is not guaranteed to complete until the [[Sync|synchronisation]], and at that point the value of ''i'' is ''8'' (the loop iterates up to and including 7, after which ''i'' is incremented but found to be too large and the loop ceases). It is something to be aware of - the value of a variable being remotely written still ''matters'' up until the corresponding synchronisation.
There are a number of ways in which we could change this code to make it do what we want; the easiest is to use a temporary variable allocated on the heap (which will be garbage collected after the synchronisation). To do this, replace the ''proc 0'' block with:
proc 0 {
var i;
for i from 0 to 7 {
var m:Int::heap;
m:=i;
a[2][i]:=m;
};
};
This is an example of writing into the remote memory of a process, modifying multiple indexes of an array (in any dimension).
=== Communicating entire arrays ===
#include <io>
#include <string>
function void main() {
var a:array[Int,20]::allocated[single[on[1]]];
var b:array[Int,20]::allocated[single[on[2]]];
proc 1 {
var i;
for i from 0 to 19 {
a[i]:=1;
};
};
b:=a;
sync;
proc 2 {
var i;
for i from 0 to 19 {
print(itostring(b[i])+"\n");
};
};
};
This code example demonstrates populating an array held on one process, assigning it in its entirety to an array on another process (line 13), synchronising and then the other process reading out all elements of that target array which has just been remotely written to.
== Row and column major ==
By default arrays are row major allocated using the [[Row|row]] type. This can be overridden to column major via the [[Col|col]] type.
function void main() {
var a:array[Int,16,8]::allocated[col::multiple];
};
This will allocate array ''a'' as an [[Int]] array of 16 by 8, allocated to all processes using column major memory allocation.
For something more interesting let's have a look at the following code:
#include <io>
#include <string>
function void main() {
var a:array[Int,16,8];
var i;
var j;
for i from 0 to 15 {
for j from 0 to 7 {
a[i][j]:=(i*10) + j;
};
};
print(itostring(a::col[][14][7]));
};
By default variable ''a'' is [[Row|row major]] allocated and we are filling up the array in this fashion. However, in the [[Print|print]] statement we are accessing the indexes of this array in a [[Col|column major]] fashion. Try changing [[Col|col]] to [[Row|row]], or removing it altogether, to see the difference in value. Behind the scenes the types are doing the appropriate memory look up based upon their meaning and the indexes provided. Mixing memory allocation in this manner can be very useful for array transposition amongst other things. ''Exercise:'' Experiment with the [[Col|col]] and [[Row|row]] types and also see what effect placing them in the type chain of ''a'' has, as in the previous example.
[[Category:Tutorials|Arrays]]
71078da30e379159816c2afd63b2f66de4097383
Tutorial - Parallel Types
0
224
1245
1244
2019-04-15T15:45:00Z
Polas
1
8 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing the use of types for more advanced parallelism in Mesham</metadesc>
'''Tutorial number six''' - [[Tutorial_-_Shared Memory|prev]] :: [[Tutorial_-_Arrays|next]]
== Introduction ==
Up until this point we have been dealing with the default shared memory model of communication. Whilst this is a simple, safe and consistent model it can have a performance penalty associated with it. In this tutorial we shall look at overriding the default communication, via types, to a more message passing style.
== A channel ==
#include <io>
#include <string>
function void main() {
var a:Int::channel[1,2];
var b:Int::allocated[single[on[2]]];
proc 1 {
a:=23;
};
proc 2 {
b:=a;
print(itostring(b)+"\n");
};
};
In this example we are using variable ''a'' as a [[Channel|channel]], between processes ''1'' and ''2''. At line 8, process 1 writes the value ''23'' into this channel and at line 11, process 2 reads that value out of the channel. Note that [[Channel|channels]] are unidirectional (i.e. process 2 could not write to process 1 in this example.)
=== Pipes ===
#include <io>
#include <string>
function void main() {
var a:Int:: pipe[1,2];
var b:Int;
var p;
par p from 0 to 2 {
var i;
for i from 0 to 9 {
var master:=i%2==0?1:2;
var slave:=i%2==0?2:1;
if (p==master) a:=i;
if (p==slave) {
b:=a;
print(itostring(p)+": "+itostring(b)+"\n");
};
};
};
};
This code demonstrates using the [[Pipe|pipe]] type for bidirectional point to point communication. If you change the [[Pipe|pipe]] to a [[Channel|channel]] then you will see that instead, only process 1 may send and only 2 may receive.
== Extra parallel control ==
By default the channel type is blocking; there are a number of fine grained types which you can use to modify this behaviour.
#include <io>
#include <string>
function void main() {
var a:Int::channel[0,1]::nonblocking[];
var b:Int;
proc 0 {
a:=23;
sync a;
};
proc 1 {
b:=a;
sync a;
print(itostring(b)+"\n");
};
};
In this code we are using the [[Nonblocking|nonblocking]] type to override the default blocking behaviour of a [[Channel|channel]]. The type is connected to the [[Sync|sync]] keyword such that it will wait at that point for outstanding communication to complete. Try experimenting with the code to understand the differences these types make.
== Collective communication ==
Mesham has a number of collective communication types; here we are just going to consider [[Reduce|reduce]] and [[Broadcast|broadcast]].
=== A broadcast ===
The broadcast type allows us to explicitly specify that a communication is to involve all processes (in the current parallel scope).
#include <io>
#include <string>
function void main() {
var a:Int;
a::broadcast[2]:=23;
print(itostring(a)+"\n");
};
In this example we are declaring ''a'' to be a normal [[Int]] variable. In the assignment we coerce the [[Broadcast|broadcast]] type onto the existing type chain of ''a'' just for that statement, telling the type that process ''2'' is the root process. The root process is the one that drives the broadcast itself, i.e. here process 2 sends the value ''23'' to all other processes. In the following statement we use ''a'' as a normal program variable to display its value. This use of types is actually quite a powerful one; we can append extra types for a specific expression and, after that expression has completed, the behaviour returns to what it was before.
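The effect of the broadcast can be modelled very simply in Python: after the collective, every process's local copy of ''a'' holds the root's value. This is a sketch of the semantics, not how the runtime library implements it.

```python
def broadcast(values, root):
    """values[p] is process p's local copy of `a`; after the broadcast
    every process holds the root's value."""
    return [values[root]] * len(values)

values = [0, 0, 23, 0]           # process 2 holds 23 before the collective
values = broadcast(values, root=2)
print(values)                    # every process now holds 23
```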
=== A reduction ===
Another very common parallel operation is to combine values from a number of processes and, by applying some operation, [[Reduce|reduce]] them to a single resulting value.
#include <io>
#include <string>
function void main() {
var p;
par p from 0 to 19 {
var a:Int;
a::reduce[0,"sum"]:=p;
if (p==0) print(itostring(a)+"\n");
};
};
This code will combine the values of each process's ''p'' onto process 0 and sum them up. Multiple operations are supported and are listed in the [[Reduce|reduce type documentation]].
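A sketch of the ''reduce[0,"sum"]'' semantics in Python: each of the 20 processes contributes its id, and only the root (process 0) receives the combined result.

```python
def reduce_sum(contributions, root):
    """Sketch of a sum reduction: only the root receives the result."""
    result = [None] * len(contributions)
    result[root] = sum(contributions)
    return result

P = 20
result = reduce_sum(list(range(P)), root=0)   # each process p contributes p
print(result[0])                              # 0 + 1 + ... + 19 = 190
```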
== Eager one sided communication ==
Whilst normal one sided communications follow the Logic Of Global Synchrony (LOGS) model of shared memory communication, completing only when a synchronisation is issued, it is possible to override this default behaviour so that communications complete at the point the assignment or access is issued instead.
#include <io>
#include <string>
function void main() {
var i:Int::eageronesided::allocated[single[on[1]]];
proc 0 { i:=23; };
sync;
proc 1 { print(itostring(i)+"\n"); };
};
Compile and run this fragment and see that the value ''23'' has been set without any explicit synchronisation on variable ''i''. Now remove the eager part of the [[Eageronesided|eager one sided type]] (or remove it altogether; remember [[onesided]] is the default communication) and see that, without a synchronisation on ''i'', the value is 0. You can add a [[Sync|sync]] after the assignment to ''i'' to complete the normal one sided call. We require a synchronisation between the proc calls here to ensure that process 1 does not run before process 0 has set the value.
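The contrast between the default deferred (LOGS-style) completion and eager completion can be sketched in Python; the class `OneSidedVar` is invented for illustration and does not correspond to any real Mesham or runtime-library API.

```python
class OneSidedVar:
    """Sketch: default one-sided writes are buffered until sync,
    eager one-sided writes complete immediately."""
    def __init__(self, eager):
        self.eager = eager
        self.value = 0
        self._pending = []

    def write(self, v):
        if self.eager:
            self.value = v           # eageronesided: visible at once
        else:
            self._pending.append(v)  # default: deferred until sync

    def sync(self):
        for v in self._pending:      # complete outstanding communications
            self.value = v
        self._pending = []

lazy = OneSidedVar(eager=False)
lazy.write(23)
print(lazy.value)    # still 0: the write is not yet visible

eager = OneSidedVar(eager=True)
eager.write(23)
print(eager.value)   # 23: visible without any sync

lazy.sync()
print(lazy.value)    # 23 once the synchronisation has completed
```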
[[Category:Tutorials|Parallel Types]]
d77cb9304855c7a7af40589a701d4ffc96f995ec
The Compiler
0
225
1261
1260
2019-04-15T15:45:00Z
Polas
1
15 revisions imported
wikitext
text/x-wiki
== Overview ==
The core translator produces ANSI C99 code which uses the Message Passing Interface (version 2) for communication. Therefore an implementation of MPI, such as OpenMPI, MPICH or a vendor specific MPI, is required on the target machine; any of these will work with the generated code. Additionally our runtime library (known as Idaho) needs to be linked in. The runtime library performs three roles. Firstly, it is architecture specific (versions exist for different flavours of Linux), containing any non-portable code which is needed, and is optimised for specific platforms. Secondly, it contains frequently called functions which would otherwise increase the size of the generated C code. Lastly, placing certain functionality in this library means that if one wishes to tune or modify behaviour for a specific platform then it can be done at the library level rather than having to recompile all existing Mesham codes. The standard runtime library requires the [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Boehm-Demers-Weiser conservative garbage collector (libgc)].
<center>[[File:Meshamworkflow.png|500px]]</center>
The resulting executable can be thought of as any normal executable and can be run in a number of ways. For simplicity the user can run their program with just one process; the program will automatically spawn the number of processes required. Alternatively the executable can be run with the exact number of processes needed, which may be instigated via a process file or queue submission program. It should be noted that, as long as your MPI implementation supports multi-core (and the majority of them do), the code can be executed properly on a multi-core machine, often with the processes wrapping around the cores (for instance 2 processes on 2 cores is 1 process on each, 6 processes on 2 cores is 3 processes on each, etc.)
Whilst earlier versions of the MPICH daemon allowed the user to simply run their executable and the daemon would pick it up, ''Hydra'', the latest MPICH process manager, requires you to run it via the ''mpiexec'' command. We suggest ''mpiexec -np 1 ./name'', where ''name'' is the name of your executable; the code will spawn the necessary number of processes.
== Compilation in more detail ==
The compiler itself comprises a number of phases. Firstly, your Mesham code goes through a preprocessor which expands the directives (such as [[Include|include]]) into Mesham code. It is at the preprocessor stage that the standard function libraries are made available to the code if the programmer has included them. The code is then fed into the core compiler, which contains the keywords and general rules of the language but does not contain any types. These types exist in a separate library and their behaviour is called, via an API, from the core compiler into the appropriate types.
<center>[[File:Oubliettelandscape.png|500px]]</center>
The [[Oubliette]] core produces non-human-readable ANSI C99 code as an intermediate representation (IR), which is then fed into an applicable C compiler. This stage is also performed by the compiler, although it is possible to dump out this C code and compile it manually if desired.
== Command line options ==
* '''-o [name]''' ''Select output filename''
* '''-I [dir]''' ''Include the directory in the preprocessor path''
* '''-c''' ''Output C code only to a file''
* '''-cc''' ''Output C code only to stdout''
* '''-e''' ''Display C compiler errors and warnings also''
* '''-g''' ''Produce executable that is debuggable with gdb and friends''
* '''-s''' ''Silent operation (no warnings)''
* '''-summary''' ''Produce a summary of compilation''
* '''-pp''' ''Output preprocessed result to a file''
* '''-f [args]''' ''Forward arguments to the C compiler''
* '''-static''' ''Statically link against the runtime library''
* '''-shared''' ''Dynamically link against the runtime library (default)''
* '''-env''' ''Display compiler environment variable information''
* '''-h''' ''Display compiler help message''
* '''-v''' ''Display compiler version information''
* '''-vt''' ''Display compiler and type version information''
* '''-vtl''' ''Display information about currently loaded type libraries''
== Environment variables ==
The Mesham compiler requires certain environment variables to be set in order to select certain options such as the C compiler and location of dependencies. It is not necessarily required to set all of these - a subset will be fine if that is appropriate to your system.
* '''MESHAM_SYS_INCLUDE''' ''The location of the mesham function include files, separated via ;''
* '''MESHAM_INCLUDE''' ''The optional location of any additional include files, separated via ;''
* '''MESHAM_C_COMPILER''' ''The C compiler to use, mpicc is a common choice''
* '''MESHAM_C_COMPILER_ARGS''' ''Optional arguments to supply to the C compiler, for instance optimisation flags''
* '''MESHAM_C_INCLUDE''' ''The location of header files for the C compiler to include, specifically mesham.h, separated via ;''
* '''MESHAM_C_LIBRARY''' ''The location of libraries for the C compiler to link against, specifically the runtime library, separated via ;''
* '''MESHAM_TYPE_EXTENSIONS''' ''The location of dynamic (.so) type libraries to load in. If not set then no extension type libraries will be loaded''
It is common to set these environment variables in your ''.bashrc'' script, which normally lives in your home directory; for example:
export MESHAM_SYS_INCLUDE=/usr/include/mesham
export MESHAM_C_INCLUDE=$HOME/mesham/idaho
export MESHAM_C_LIBRARY=$HOME/mesham/idaho
export MESHAM_C_COMPILER=mpicc
This sets these four variables to typical values; adjust them as required for your system.
== Executable options ==
Once compiled, the resulting executable provides a number of command line options which report details of the runtime environment that it will operate under. These are only checked when the executable is run with one process.
* '''--mesham_p''' ''Displays the minimum number of processes required to run the code''
* '''--mesham_c''' ''Summary information about the communications layer, such as MPI, being used to link the processes''
* '''--mesham_v''' ''Displays version information about the runtime library and also the compiled executable''
e1bb073ab67ce984e4966a754e35cd809f0ebe80
File:Meshamworkflow.png
6
226
1263
1262
2019-04-15T15:45:00Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Workflow of the oubliette Mesham compiler
2f12daa92ed9113e2b742d63a1005e7a62142360
1388
1263
2019-04-15T15:54:50Z
Polas
1
Polas uploaded [[File:Meshamworkflow.png]]
wikitext
text/x-wiki
Workflow of the oubliette Mesham compiler
2f12daa92ed9113e2b742d63a1005e7a62142360
File:Oubliettelandscape.png
6
227
1265
1264
2019-04-15T15:45:01Z
Polas
1
1 revision imported
wikitext
text/x-wiki
Oubliette landscape
6026efe464b9feb2efb09a99e374f9bc02b73847
1389
1265
2019-04-15T15:59:58Z
Polas
1
Polas uploaded [[File:Oubliettelandscape.png]]
wikitext
text/x-wiki
Oubliette landscape
6026efe464b9feb2efb09a99e374f9bc02b73847
File:Oubliette.png
6
228
1268
1267
2019-04-15T15:45:01Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
Oubliette Mesham logo
f65cd21ac0c4b7ae1f8443f707ffdcd41ef126cb
1393
1268
2019-04-15T16:03:48Z
Polas
1
Polas uploaded [[File:Oubliette.png]]
wikitext
text/x-wiki
Oubliette Mesham logo
f65cd21ac0c4b7ae1f8443f707ffdcd41ef126cb
Download 1.0
0
229
1291
1290
2019-04-15T15:45:02Z
Polas
1
22 revisions imported
wikitext
text/x-wiki
<metadesc>Download the latest version of the Mesham type oriented parallel programming language</metadesc>
{{Applicationbox|name=Mesham compiler 1.0|author=[[User:polas|Nick Brown]]|desc=The latest release of the Mesham compiler|url=http://www.mesham.com|image=oubliette.png|version=1.0.0_411|released=August 2013}}
== Introduction ==
This is the latest version of the Mesham compiler and is based upon the language as described [[Specification|here]] and documented on this website. The compiler has been entirely rewritten from scratch and this line of compiler (version 1.0 and upwards) is known as the [[Oubliette]] line to distinguish it from the previous versions.
Version 1.0.0 is currently an alpha release and as such should be considered experimental. Please keep checking back for later versions which will be released as we fix bugs and add features.
The Mesham compiler and runtime library are compatible with x86 (64 and 32 bit) Linux only; if you wish to use Mesham on a Windows operating system then you will need to download an [[Download_0.41_beta|older version]].
== Download ==
* All components (compiler, runtime library, libgc) - download 64 bit '''[http://www.mesham.com/downloads/mesham64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/mesham32.zip here]'''
* Latest compiler version: 1.0.0_411 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/oubliette64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/oubliette32.zip here]'''
* Latest runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtl64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtl32.zip here]'''
* Experimental thread based runtime library version: 1.0.03 released 16th August 2013 - download 64 bit '''[http://www.mesham.com/downloads/rtlthreads64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/rtlthreads32.zip here]'''
* Conservative garbage collector (libgc) version 7.2 - download 64 bit '''[http://www.mesham.com/downloads/libgc64.zip here]''' and 32 bit '''[http://www.mesham.com/downloads/libgc32.zip here]'''
''If you are unsure whether you are running under a 32 bit or 64 bit system, then issue uname -m; a result of x86_64 means 64 bit, while any other value such as i686 means 32 bit.''
== Prerequisites ==
In order to compile and run Mesham code you need an implementation of MPI (version 3) and a C compiler. We suggest '''MPICH''' and '''GCC''', which are available in source and binary form, and most systems make them available via their package manager (e.g. apt-get.) Refer to your system documentation for the best way to get these packages if you do not already have them installed.
If you are using the experimental thread based runtime library then MPI is not required; the thread based RTL uses pthreads, which is usually already installed.
== Installation Instructions ==
Whilst it is a manual installation procedure, the good news is that this is very simple and will be elementary to anyone familiar with Linux.
It is suggested to download ''all components'', which will provide you with the compiler, runtime library and libgc. Unpack the archive and place the ''mcc'' executable (which is the main compiler) in your chosen location. It is suggested either to add the location to your path environment variable or to add a symbolic link from the ''/usr/bin'' directory to the ''mcc'' binary so that you can call it regardless of the working directory.
Next we need to set up some environment variables to tell the compiler exactly where to find various things. We can set these via the command ''export variable=value'', where ''variable'' is the name of the environment variable you wish to set and ''value'' is the value to set it to. The first environment variable, '''MESHAM_C_COMPILER''', decides which C compiler to use; we suggest mpicc and, if you agree, issue ''export MESHAM_C_COMPILER=mpicc''.
Next we are going to set '''MESHAM_SYS_INCLUDE''', which points to the Mesham system include files (supplied with the compiler in the ''includes'' directory.) Set this variable to point to the directory containing these .mesh files. '''MESHAM_C_INCLUDE''' needs to point to the directory containing the ''mesham.h'' header file and will be used by the C compiler. This, along with the runtime library, is supplied in the ''rtl'' directory. Lastly, '''MESHAM_C_LIBRARY''' should point to the directory containing the Mesham runtime library and also the libgc library (these can be in the same directory, or you can separate the values via '';''.) It is suggested to add these exports to your ''.bashrc'' script to avoid excessive typing.
An optional environment variable is the '''MESHAM_C_COMPILER_ARGS''' variable, which allows for specific flags to be provided to the underlying C compiler on each run regardless of the Mesham code or explicit user command line arguments. This is useful to apply certain machine specific optimisations.
If you do not wish to set these last two environment variables then alternatively you can symlink ''libmesham.so'' and ''libgc.so'' into your ''/usr/lib'' directory and the ''mesham.h'' header file into ''/usr/include''.
Now we have done this we are good to go; issue ''mcc -env'' which will display the environment variables.
== Testing the compiler ==
Copy the following code into test.mesh, then compile via ''mcc -e test.mesh'' (the -e flag will display any errors reported by the C compiler.) All being well an executable ''test'' will appear, run this via ''mpiexec -np 1 ./test'' after ensuring your favourite MPI process manager is running.
#include <io>
#include <string>
#include <parallel>
function void main() {
group 0,1,2,3 {
print("Hello from process "+itostring(pid())+"\n");
};
};
All being well, you should see the output (but the order of the lines will vary):
Hello from process 0
Hello from process 2
Hello from process 3
Hello from process 1
e37a1b609f623fdbb19d2101635d2fe2c3db8f1e
File:Robot-cleaner.jpg
6
230
1294
1293
2019-04-15T15:45:02Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
1390
1294
2019-04-15T16:03:09Z
Polas
1
Polas uploaded [[File:Robot-cleaner.jpg]]
wikitext
text/x-wiki
da39a3ee5e6b4b0d3255bfef95601890afd80709
Download libgc
0
231
1298
1297
2019-04-15T15:45:02Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham uses lib GC to garbage collect during execution, download it here</metadesc>
{{Applicationbox|name=Lib GC 7.2|author=Hans Boehm|desc=Garbage collector library used by the Mesham runtime library.|url=http://www.hpl.hp.com/personal/Hans_Boehm/gc/|image=Robot-cleaner.jpg|version=7.2|released=May 2012}}
== Introduction ==
The default runtime library uses the Boehm-Demers-Weiser conservative garbage collector. It allows one to allocate memory, without explicitly deallocating it when it is no longer useful. The collector automatically recycles memory when it determines that it can no longer be otherwise accessed.
== Download ==
We provide download links, '''[http://www.mesham.com/downloads/libgc64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/libgc32.zip 32 bit here]''', to precompiled library versions, which are all that is required to use Mesham. We suggest you use these provided, precompiled forms as they have been tested with Mesham. It is likely that future versions (later than 7.2) will work fine, although they have not necessarily been tested.
You can access further information, documentation and download the latest source code from the project website [http://www.hpl.hp.com/personal/Hans_Boehm/gc/ here].
0f310d35309597c8cac05f1d1abff381a73eb351
Download rtl 1.0
0
232
1305
1304
2019-04-15T15:45:03Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
<metadesc>Mesham type oriented parallel programming language runtime library</metadesc>
{{Applicationbox|name=Runtime library 1.0|author=[[User:polas|Nick Brown]]|desc=The latest runtime library compatible with version 1.0 of the Mesham compiler.|url=http://www.mesham.com|image=Runtimelibrary.png|version=1.0.03|released=August 2013}}
== Runtime Library Version 1.0 ==
Version 1.0 is currently the most up-to-date version of the Mesham runtime library and is required by Mesham 1.0. This version of the library has been re-engineered to support the [[Oubliette]] compiler line and as such is not backwards compatible with older versions.
This line of runtime library is known as the [[Idaho]] line.
== Download ==
You can download the runtime library, '''[http://www.mesham.com/downloads/rtl64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtl32.zip 32 bit here]'''
== Experimental thread based ==
We have created an experimental thread based RTL, where all the programmer's parallel processes are represented as threads and all communication is implemented using shared memory. By running threads, rather than separate processes, this has the benefits of reduced overhead on multi-core machines and no need for an MPI implementation to be installed. Threading is achieved via the pthreads library, which is readily available on Linux. Your code should run without modification, and all of the example code on this website, including the tutorials, has been tested and found to work in the threading mode.
The thread based runtime library can be downloaded, '''[http://www.mesham.com/downloads/rtlthreads64.zip 64 bit here]''' and '''[http://www.mesham.com/downloads/rtlthreads32.zip 32 bit here]'''
== Garbage collector ==
By default you will also need the lib GC garbage collector which can be found [[Download_libgc|here]].
== Instructions ==
Detailed installation, usage and options instructions are included with the library. Additionally these can be found on the [[Download 1.0|download 1.0 package]] page.
1a283a0d3d9621a6a043ea8ec772471d806ba46f
Idaho
0
233
1309
1308
2019-04-15T15:45:03Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
<metadesc>Idaho is the Mesham runtime library</metadesc>
[[File:Runtimelibrary.png|right]]
== Introduction ==
Idaho is the name of the reengineered Mesham runtime library. We have always given parts of the language different nicknames and [[Oubliette]] is the name of the reengineered compiler that requires Idaho. The runtime library is used by a compiled executable whilst it is running and, apart from providing much of the lower level language functionality such as memory allocation, remote memory (communication) management and timing, it also provides the native functions which much of the standard function library requires.
We have designed the system in this manner such that platform specific behaviour can be contained within this library and the intention will be that a version of the library will exist for multiple platforms. Secondly by modifying the library it is possible to tune how the Mesham executables will run, such as changing the garbage collection strategy.
== Abstracting communication ==
All physical parallelism, including communication and process placement, is handled by the lowest level communication layer in the RTL. By changing this layer we can support, and optimise for, multiple technologies. Implementations of this layer currently exist for process based (MPI) parallelism and thread based (pthreads) parallelism. Currently the choice is delivered by downloading the appropriate runtime library files.
== API ==
The set of functions which Idaho provides can be viewed in the ''mesham.h'' header file. It is intended to release the source code when it is more mature.
9ff09577aa88bf9e5babbe53bdabb995eab90432
Mesham parallel programming language:About
0
234
1311
1310
2019-04-15T15:45:03Z
Polas
1
1 revision imported
wikitext
text/x-wiki
#REDIRECT [[What_is_Mesham]]
46e26242036cdc74c7a0ac7260e0182e1951639d
Mesham parallel programming language:General disclaimer
0
235
1314
1313
2019-04-15T15:45:03Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
= No warranty of any kind =
Mesham makes no guarantee of the validity or safety of the information contained on or copied from this site. This site contains source code, binary executables and documentation which can be used in the creation of source code. The information contained here is for research purposes and should be verified as accurate by yourself before use. Any software (source or binary) created from the information contained here, or software located at this site, has the following disclaimer:
<pre>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
</pre>
We strongly advise that you virus check all downloaded software, regardless of origin, before use.
d8fac0de08466638c8b9e5710856387378662a4d
Mesham parallel programming language:Privacy policy
0
236
1316
1315
2019-04-15T15:45:03Z
Polas
1
1 revision imported
wikitext
text/x-wiki
=Privacy Policy=
Where possible Mesham will attempt to respect your privacy. No information collected will be shared with third parties. This includes such data as server logs and information not publicly shared by authors and editors. Mesham is located in the United Kingdom and may be required to comply with legal requests to identify people if they commit illegal activities on this site. Please, no warez, virus writing, OS exploiting, or links to those types of activities. Please do not add your private information unless you are sure you want it shared, as deleting content in the wiki does not permanently remove it. Do not post other people's private information.
c87a32e1157c0c17605558ea52a9485d68e4afde
Tutorial - Dynamic Parallelism
0
237
1323
1322
2019-04-15T15:45:03Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing dynamic parallelism in Mesham</metadesc>
'''Tutorial number nine''' - [[Tutorial_-_RMA|prev]] :: [[Tutorial_-_Advanced Types|next]]
== Introduction ==
If you are following these tutorials in order then you could be forgiven for assuming that Mesham requires the programmer to explicitly set the number of processes in their code. This is entirely untrue: whilst structuring your code around a fixed number of processes can lead to cleaner code, Mesham supports a dynamic number of processes which is decided upon at runtime. This tutorial will look at how you can use dynamic parallelism and write your code in this manner.
== In its simplest form ==
#include <parallel>
#include <io>
#include <string>
function void main() {
print(itostring(pid())+"\n");
};
Compile the above code and run it with one process; now run it with ten, now with any number you want. See how, even though the code explicitly requires one process, running with more simply executes that code on all the other processes. There are a number of rules associated with writing parallel codes in this fashion. Firstly, '''the number of processes can exceed the required number but it cannot be smaller''': if our code requires ten processes then we can run it with twenty, one hundred or even one thousand, but we cannot run it with nine. Secondly, the code and data applicable to these extra processes is all variables allocated [[Multiple|multiple]] and all code which is written SPMD style (i.e. outside of [[Par|par]], [[Group|group]], [[Proc|proc]] and parallel composition.)
== A more complex example ==
So let's have a look at something a bit more complex that involves the default shared memory communication
#include <parallel>
#include <io>
#include <string>
function void main() {
var numberProc:=processes();
var s:array[Int, numberProc]::allocated[single[on[0]]];
s[pid()]:=pid();
sync;
proc 0 {
var i;
for i from 0 to processes() - 1 {
print(itostring(i)+" = "+itostring(s[i])+"\n");
};
};
};
Compile and run this example with any number of processes and look at how the code handles us changing this number. There are a couple of general points to make about this code. Firstly, notice that we are still using the [[Proc|proc]] parallel construct of Mesham for process selection (which is absolutely fine to do); we could instead have done something like ''if (pid()==0)'', and the choice is entirely up to the programmer.
Next, modify variable ''s'' to be on process 2 (and change the [[Proc|proc]] statement to run on this process also). If you recompile and run this code then it will work fine as long as the number of processes is at least the required number (which is 3.) If you were to try to run the code with 2 processes, for example, then it will give you an error; the only exception is the usual rule that if you run it with one process then Mesham will automatically spawn the required number and run over these. However, this illustration raises an important point: how can we (easily) tell how many processes to use? Happily there are two ways: either compile the code using the ''-summary'' flag or run the resulting Mesham executable with the ''--mesham_p'' flag, which will report how many processes that executable expects to be run over.
== Dynamic type arguments ==
Often, when writing parallel code in this manner, you also want to use flexible message passing constructs. Happily all of the message passing override types, such as [[Channel|channel]], [[Reduce|reduce]] and [[Broadcast|broadcast]], support arguments which are only known at runtime. Let's have a look at an example to motivate this.
#include <parallel>
#include <io>
#include <string>
function void main() {
var a:=pid();
var b:=a+1;
var c:=a-1;
var c1:Int::allocated[multiple]::channel[a,b];
var c2:Int::allocated[multiple]::channel[c,a];
var t:=0;
if (pid() > 0) t:=c2;
if (pid() < processes() - 1) c1:=t+a;
t:=t+a;
if (pid() + 1 == processes()) print(itostring(t)+"\n");
};
The above code is a prefix-sums type algorithm, where each process sends to the next one (whose id is one greater than its own) its current id plus the ids of all processes before it. The process with the largest id then displays the total, which obviously depends on the number of processes used to run the code. One point to note is that we can (currently) only use variables and values as arguments to types; for example, if you used the function call ''pid()'' directly in the [[Channel|channel]] type then it would give a syntax error. This is a limitation of the Mesham parser and will be addressed in a future release.
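The pipeline above can be checked with a small sequential Python sketch: each process receives the running total from its predecessor, adds its own id, and passes the result on, so the last process holds the sum of all ids.

```python
def pipeline_total(processes):
    """Sequential sketch of the channel pipeline: t flows through the
    processes in id order, each adding its own id (a = pid())."""
    t = 0
    for pid in range(processes):   # process 0 first, then 1, 2, ...
        t = t + pid                # t := t + a
    return t                       # the value printed by the last process

print(pipeline_total(5))   # 0+1+2+3+4 = 10
```

Running the Mesham code with P processes should print P(P-1)/2, matching this sketch.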
[[Category: Tutorials|Dynamic Parallelism]]
87cef3b5a09feb946464b8866af7063b6092ab3d
Tutorial - Advanced Types
0
238
1328
1327
2019-04-15T15:45:04Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing advanced type features of Mesham</metadesc>
'''Tutorial number ten''' - [[Tutorial_-_Dynamic Parallelism|prev]]
== Introduction ==
Mesham has a number of advanced typing features over and above type chains and type coercion. In this tutorial we will look at some of these, how they might be used and how they can simplify your program code.
== Type Variables ==
The language has a concept of a type variable: a compile-time, programmer-defined type representing a more complex type chain. Let's have a look at this in more detail via an example.
function void main() {
typevar typeA::=Int::allocated[multiple];
typevar typeB::=String::allocated[single[on[3]]];
var a:typeA;
var b:typeB;
};
In this example we create two type variables called ''typeA'' and ''typeB'' which represent different type chains. The actual program variables ''a'' and ''b'' are then declared using these type variables. Notice how type assignment uses the ''::='' operator rather than the normal program variable assignment operator '':=''.
function void main() {
typevar typeA::=Int::allocated[multiple];
var a:typeA;
typeA::=String;
var b:typeA;
typeA::=typeA::const;
var c:typeA;
};
This example demonstrates assigning types and chains to existing type variables. First we declare the type variable ''typeA'' and use it in the declaration of program variable ''a''. We then modify the value of the type variable ''typeA'', using the ''::='' operator, to be a [[String]] instead, and declare variable ''b'' using this type variable, which effectively sets its type to be a String. The final assignment demonstrates how we can use the type variable in type chain modification; variable ''c'' is a constant [[String]].
'''Note:''' It is important to appreciate that type variables exist only during compilation; they do not exist at runtime and as such cannot be used in conditional statements.
== Types of program variables ==
Mesham provides some additional keywords to help manage and reference the types of program variables; however, it is imperative to remember that these are static only, i.e. they exist only during compilation.
=== Currenttype ===
Mesham has an inbuilt [[Currenttype|currenttype]] keyword which will result in the current type chain of a program variable.
a:currenttype a :: const;
a:a::const;
In this code snippet both lines are identical: they set the type of program variable ''a'' to be its current type chain combined with the [[Const|const]] type. Note that using a program variable in a type chain, as in the snippet above, is a syntactic shortcut for its current type (using the [[Currenttype|currenttype]] keyword); either form can be used.
=== Declaredtype ===
It can sometimes be useful to reference or even revert back to the declared type of a program variable later on in execution. To do this we supply the [[Declaredtype|declaredtype]] keyword.
function void main() {
var a:Int;
a:a::const;
a:declaredtype a;
a:=23;
};
This code will compile and work fine because, although we are coercing the type of ''a'' to include the [[Const|const]] type at line three, on line four we revert the type to be the declared type of the program variable. If you are unsure why this is the case, try moving the assignment around to see when the code no longer compiles.
== An example ==
Type variables are commonly used with [[Record|records]] and [[Referencerecord|referencerecords]]. Indeed, the [[Complex|complex]] type obtained from the [[:Category:Maths_Functions|maths library]] is itself a type variable.
#include <string>
#include <io>
typevar node;
node::=referencerecord[Int, "data", node, "next"];
function void main() {
var i;
var root:node;
root:=null;
for i from 0 to 9 {
var newnode:node;
newnode.data:=i;
newnode.next:=root;
root:=newnode;
};
while (root != null) {
print(itostring(root.data)+"\n");
root:=root.next;
};
};
This code will build up a linked list of numbers and then walk it, displaying each number as it goes. Whilst it is a relatively simple program, it illustrates how one might use type variables to improve the readability of their code. One important point to note is a current limitation in the Mesham parser: we are forced to declare the type variable ''node'' on line four and then separately assign to it at line five. The reason is that the assignment references the ''node'' type variable within the [[Referencerecord|referencerecord]] type, and as such the type variable must already exist.
== Limitations ==
There are some important limitations to note about the current use of types. Types currently only exist explicitly during compilation, which means that you cannot, for example, pass them into functions or communicate them. Additionally, once allocation information (the [[Allocated|allocated]] type) and its subtypes have been set they cannot be modified, nor can you change the [[:Category:Element_Types|element type]].
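As an illustrative sketch of these restrictions (the exact compiler diagnostics will vary), both type coercions below are rejected: the first because it attempts to change allocation information which has already been set, and the second because it attempts to change the element type:
 function void main() {
   var a:Int::allocated[multiple];
   a:a::allocated[single[on[0]]];
   a:Double::allocated[multiple];
 };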
[[Category: Tutorials|Advanced Types]]
1bce0537b1747d60db6fda126b75118db6183104
Exp
0
239
1331
1330
2019-04-15T15:45:04Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Overview ==
This exp(x) function will return the exponential of ''x'' (e raised to the power of ''x'').
* '''Pass:''' A [[Double]]
* '''Returns:''' A [[Double]] representing the exponential
== Example ==
#include <maths>
function void main() {
var a:=exp(23.4);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:Maths Functions]]
5b1c383b25ca0b99218b7ff4203776b37ebf14c5
MediaWiki:Aboutsite
8
240
1333
1332
2019-04-15T15:45:04Z
Polas
1
1 revision imported
wikitext
text/x-wiki
About Mesham
1a225dd5f20931244854af8a4f66fee7030eca49
Findchar
0
241
1336
1335
2019-04-15T15:45:04Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Overview ==
This findchar(s, c) function will return the index of the first occurrence of character ''c'' in string ''s''.
* '''Pass:''' A [[String]] and [[Char]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the character does not exist within the string
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=findchar(a,'l');
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
c27bb1368a0c7d0a9c08293b91676cc2ce9a1196
Findrchar
0
242
1339
1338
2019-04-15T15:45:04Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Overview ==
This findrchar(s, c) function will return the index of the last occurrence of character ''c'' in string ''s''.
* '''Pass:''' A [[String]] and [[Char]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the character does not exist within the string
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=findrchar(a,'l');
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
d57a4e8ea32daff716b515558dee3f6cbad338a7
Findstr
0
243
1342
1341
2019-04-15T15:45:04Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Overview ==
This findstr(s, s2) function will return the index of the first occurrence of search string ''s2'' in text string ''s''.
* '''Pass:''' Two [[String|Strings]]
* '''Returns:''' An [[Int]]
* '''Throws:''' The error string ''notfound'' if the search string does not exist within the string
== Example ==
#include <string>
function void main() {
var a:="hello";
var c:=findstr(a,"el");
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
8099c46bfc158c7e371111e9ba241e8125e6ab25
Trim
0
244
1344
1343
2019-04-15T15:45:05Z
Polas
1
1 revision imported
wikitext
text/x-wiki
== Overview ==
This trim(s) function will return a new string where the leading and trailing whitespace of string ''s'' has been removed.
* '''Pass:''' A [[String]]
* '''Returns:''' A [[String]]
== Example ==
#include <string>
#include <io>
function void main() {
var m:=" hello world ";
print(m+"-\n"+trim(m)+"-\n");
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:String Functions]]
227970822be9bf5e65af0f815afb030822c05422
Arraydist
0
245
1346
1345
2019-04-15T15:45:05Z
Polas
1
1 revision imported
wikitext
text/x-wiki
== Syntax ==
arraydist[integer array]
== Semantics ==
Will distribute data blocks amongst the processes dependent on the integer array supplied. The number of elements in this array must equal the number of blocks. The index of each element corresponds to the block ID, and the value at that location gives the process on which the block resides. For example, the value 5 at location 2 will place block number 2 onto process 5.
== Example ==
function void main() {
var d:array[Int,4];
d[0]:=3;
d[1]:=0;
d[2]:=2;
d[3]:=1;
var a:array[Int,16,16] :: allocated[horizontal[4] :: single[arraydist[d]]];
var b:array[Int,16,16] :: allocated[single[on[1]]];
a:=b;
};
In this example the array is split using horizontal partitioning into 4 blocks, the first block held on process 3, the second on process 0, third on process 2 and lastly the fourth on process 1. In the assignment on line 10 the data in array ''b'' is distributed to the correct blocks which are held on the appropriate processes depending on the array distribution. To change what data goes where one can simply modify the values in array ''d''.
''Since: Version 1.0''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Distribution Types]]
61b333bed902219d29da647466f1f5928bc43884
Template:OneDimPartitionDotOperators
10
246
1348
1347
2019-04-15T15:45:05Z
Polas
1
1 revision imported
wikitext
text/x-wiki
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Dot operation
! Semantics
|-
| high
| Largest global coordinate wrt a block in specific block dimension
|-
| low
| Smallest global coordinate wrt a block in specific block dimension
|-
| top
| Largest global coordinate in specific block dimension
|-
| localblocks
| Number of blocks held on local process
|-
| localblockid[i]
| Id number of ith local block
|}
6d326885ad7994242be475d9e3848cf090c30bb7
Template:OneDimPartitionCommunication
10
247
1351
1350
2019-04-15T15:45:05Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
There are a number of different default communication rules associated with the one dimensional partitions, based on the assignment ''assigned variable:=assigning variable'' which are detailed below.
{| border="1" cellspacing="0" cellpadding="5" align="center"
! Assigned Variable
! Assigning Variable
! Semantics
|-
| single
| partition
| Gather
|-
| partition
| single
| Scatter
|-
| partition
| partition
| Local copy
|}
As shown in the last row of the table, if the two partitions are the same type then a simple copy is performed. However, if they are different then an error will be generated, as Mesham disallows differently typed partitions to be assigned to each other.
The programmer can also read and write each element of the partitioned data directly. Either the global coordinates, or the block ID and its local coordinates, can be supplied. Mesham will deduce whether or not the block is on another process, issue any communication as required, and complete within that single assignment or access. Because this completes in the expression itself, rather than waiting for a synchronisation, non-local data movement is potentially an expensive operation.
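For instance, in the following sketch (the array size and distribution here are illustrative) the element at global coordinates (0,0) is written and then read back; any communication needed to reach the block holding that element is issued and completed within each single assignment:
 function void main() {
   var d:array[Int,4];
   d[0]:=3;
   d[1]:=0;
   d[2]:=2;
   d[3]:=1;
   var a:array[Int,16,16] :: allocated[horizontal[4] :: single[arraydist[d]]];
   a[0][0]:=12;
   var v:=a[0][0];
 };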
af4b91ec71f70f0988bd43c7e4bd941480ae3318
Eageronesided
0
248
1353
1352
2019-04-15T15:45:05Z
Polas
1
1 revision imported
wikitext
text/x-wiki
== Syntax ==
eageronesided[a,b]
eageronesided[]
== Semantics ==
This form of one sided communication is similar to normal [[Onesided|one sided]] communication, but remote memory access happens immediately and is not linked to the synchronisation keyword. Because RMA access happens immediately, this form of communication is potentially less performant than normal one sided communication.
== Example ==
function void main() {
var i:Int::eageronesided::allocated[single[on[2]]];
proc 0 {i:=34;};
};
In the above code example variable ''i'' is declared to be an Integer, held on process two only, using eager onesided communication. On line two an assignment occurs on process zero which immediately writes the value from process zero into the memory held by process two; from that line onwards the value is available to every other process.
''Since: Version 1.0''
[[Category:Type Library]]
[[Category:Compound Types]]
[[Category:Primitive Communication Types]]
628fe61159f9ccc4aa4db25d4f8f871b09dd72e9
Sleep
0
249
1355
1354
2019-04-15T15:45:05Z
Polas
1
1 revision imported
wikitext
text/x-wiki
== Overview ==
This sleep(l) function will pause execution for ''l'' milliseconds.
* '''Pass:''' A [[Long]] number of milliseconds to sleep for
* '''Returns:''' Nothing
== Example ==
#include <system>
function void main() {
sleep(1000);
};
''Since: Version 1.0''
[[Category:Function Library]]
[[Category:System Functions]]
0bc3a1aca52f1253f51a5b6fbc0c8a320332c02f
LineNumber
0
250
1359
1358
2019-04-15T15:45:05Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Syntax ==
_LINE_NUMBER
== Semantics ==
Will be substituted in source code by the current line number of that specific file; this is useful for debugging and error messages.
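== Example ==
A minimal sketch (assuming the substituted value is an [[Int]], which may therefore be converted with itostring):
 #include <io>
 #include <string>
 function void main() {
   print("Currently at line "+itostring(_LINE_NUMBER)+"\n");
 };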
''Since: Version 1.0''
[[Category:preprocessor]]
ddae2dc85adeebb3128be23b2ffed8bfce3aa1d0
SourceFile
0
251
1361
1360
2019-04-15T15:45:05Z
Polas
1
1 revision imported
wikitext
text/x-wiki
== Syntax ==
_SOURCE_FILE
== Semantics ==
Will be substituted in source code by the name of the current source code file; this is useful for debugging and error messages.
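== Example ==
A minimal sketch (assuming the substitution produces a [[String]]):
 #include <io>
 function void main() {
   print("Compiled from "+_SOURCE_FILE+"\n");
 };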
''Since: Version 1.0''
[[Category:preprocessor]]
795da35dff7714c5b22888b0e2511335684f94d1
Tutorial - RMA
0
252
1368
1367
2019-04-15T15:45:05Z
Polas
1
6 revisions imported
wikitext
text/x-wiki
<metadesc>Tutorial describing RMA of data in Mesham</metadesc>
'''Tutorial number eight''' - [[Tutorial_-_Arrays|prev]] :: [[Tutorial_-_Dynamic Parallelism|next]]
== Introduction ==
The default behaviour in Mesham is for communication involving variables to be performed via Remote Memory Access (RMA). This is one sided: data is remotely retrieved from, or written to, a target process by the source. We briefly looked at this in the [[Tutorial_-_Shared_Memory|shared memory tutorial]] and here we build on that to consider the concepts in more depth.
== Data visibility ==
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
var c:Int::allocated[multiple[commgroup[0,1]]];
var d:Int::allocated[single[on[0]]];
b:=a;
proc 1 {
c:=a;
};
d:=a;
proc 1 {
d:=a;
};
};
In the code snippet above, exactly what communications are occurring (i.e. are processes reading remote data or writing to it)? The best way to think about this is via a simple visibility rule: all variables marked multiple (including those with the extra commgroup type) are private to the processes that contain them, and all variables marked single are publicly visible to all processes. Therefore in the assignment at line 6 each process will remotely read from ''a'' held on process one and write this into its own local (private) copy of ''b''. At line 8, only process one will write the value of ''a'' (a local copy, as ''a'' is held on the same process) into its own local (private) version of ''c''; the value of ''c'' on process zero will remain unchanged. For variables marked single, assignment favours reading the value remotely rather than writing remotely where possible: at line 10 the assignment ''d:=a'' will result in process zero reading the value of ''a'' from process one, but at line 12 the only process that can execute the statement is process one, so this results in a remote write of ''a'' to variable ''d'' held on process zero.
== Synchronisation ==
By default RMA is non-blocking, so remote reads or writes might complete at any point and must be synchronised before values are available. This approach is adopted for performance and scalability, such that many reads and/or writes can occur between synchronisation points. The [[Sync|sync]] keyword provides synchronisation in Mesham and there are two ways to use it. Firstly, ''sync'' on its own will result in a barrier synchronisation, where each process will complete all of its outstanding RMA and then wait (barrier) for all other processes to reach that same point. The other use of synchronisation is with a variable, for instance ''sync v'' (assuming variable ''v'' already exists), which will ensure that all outstanding RMA involving only variable ''v'' completes; this second form does not involve any barrier and so is far more efficient. It is fine to synchronise on a variable which has no outstanding RMA communications, in which case the processes continue immediately.
Completion of outstanding RMA means that all communications have fully completed, i.e. remote writes have completed and the data is visible on the target process.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
b:=a;
sync b;
};
The code snippet above illustrates a potential question: based on the assignment ''b:=a'' (which involves RMA), if the programmer wished to synchronise the RMA for this assignment, should they issue ''sync b'' or ''sync a''? The simple answer is that it does not matter, because for synchronisation purposes an assignment ties the variables together: for instance ''sync b'' will synchronise RMA for variable ''b'', RMA for variable ''a'' and any other tied RMA for both these variables and their own tied variables.
== Eager RMA ==
var a:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var i;
for i from 0 to 9 {
a[i]:=i;
};
sync a;
};
We saw this example previously, where process zero will most likely write out the value of 10 (variable ''i'' after the loop) to all elements of the array; this is because the remote write is issued based on the variable rather than the variable's value at the time. You could instead place the ''sync a'' call directly after the assignment, or alternatively remove this call altogether and append the [[Eageronesided|eageronesided]] type to the type chain of variable ''a'', which will ensure the RMA communication and completion is atomic.
== Bulk Synchronous RMA ==
Many of the RMA examples we have seen in these tutorials follow a bulk synchronous approach (similar to fences), where all processes will synchronise, then communicate and then synchronise again before continuing with computation.
function void main() {
var a:Int::allocated[single[on[1]]];
var b:Int::allocated[multiple[]];
proc 1 {
a:=55;
};
sync;
b:=a;
sync;
proc 1 {
a:=15;
};
};
Because RMA communication is non-blocking and may complete at any point from issuing the communication up until the synchronisation, in this example we need two [[Sync|sync]] calls. The first ensures that process zero does not race ahead and issue the remote read before process one has written the value of ''55'' into variable ''a''. The second ensures that process one does not then rush ahead and overwrite the value of ''a'' with ''15'' before process zero has finished remotely reading it. If this last assignment (''a:=15'') did not exist then the last synchronisation could be weakened to ''sync b'' (or ''sync a''), which would complete RMA on process zero at that point and leave process one free to rush ahead.
== Notify and wait ==
The bulk synchronous approach is simple but not very scalable; it is certainly possible to experiment with different synchronisation options (for instance placing them inside the [[Proc|process selection]] blocks) but care must be taken to maintain data consistency. Another approach is to use the [[Notify|notify]] and [[Wait|wait]] support of the parallel function library. The [[Notify|notify]] function will send a notification to a specific process and the [[Wait|wait]] function will block and wait for a notification from a specific process.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[1]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notify(1);
};
proc 1 {
wait(0);
var i;
for i from 0 to 9 {
print(itostring(j[i])+"\n");
};
};
};
In the example here process zero will issue a remote write to variable ''j'' (held on process one), then synchronise (complete) this RMA before sending a notification to process one. Process one will block waiting for a notification from process zero and, once it has received one, will display its local values of ''j''. Due to the notification and waiting these values will be those written by process zero; if you comment out the [[Wait|wait]] call then process one will just display zeros.
There are some variations of these calls: [[Notifyall|notifyall]] to notify all processes, [[Waitany|waitany]] to wait for a notification from any process, and [[Test_notification|test_notification]] to test whether there is a notification from a specific process or not.
#include <io>
#include <string>
#include <parallel>
function void main() {
var j:array[Int,10]::allocated[single[on[2]]];
proc 0 {
var d:array[Int,10];
var i;
for i from 0 to 9 {
d[i]:=i;
};
j:=d;
sync j;
notifyall();
};
proc 1 {
var m:array[Int,10];
var p:=waitany();
m:=j;
sync m;
var i;
for i from 0 to 9 {
print(itostring(m[i])+" written by process "+itostring(p)+"\n");
};
};
proc 2 {
while (!test_notification(0)) { };
var i;
for i from 0 to 9 {
print("Local value is "+itostring(j[i])+"\n");
};
};
};
This example extends the previous one: here ''j'' is held on process two, and process zero remotely writes to it and then issues [[Notifyall|notifyall]] to send a notification to every other process. These other two processes could have used the [[Wait|wait]] call as per the previous example, but instead process one waits on a notification from any process (displaying the ID, returned by [[Waitany|waitany]], of the process that issued the notification) and process two tests for a notification, looping whilst the test returns false.
[[Category: Tutorials]]
4cbdc5b518f8f6d4dae32c294f9edc8b78a1d3df
Notify
0
253
1373
1372
2019-04-15T15:45:05Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This notify(n) function will notify process ''n''; the target process can then wait on or test for the notification. The call is non-blocking and the notifying process continues as soon as the function is called.
* '''Pass:''' an [[Int]] representing the process ID to notify
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
c3159b767a08110c9bce6359ec209bf4298a4f1d
Wait
0
254
1378
1377
2019-04-15T15:45:06Z
Polas
1
4 revisions imported
wikitext
text/x-wiki
== Overview ==
This wait(n) function will block and wait for a notification from process ''n''.
* '''Pass:''' an [[Int]] representing the process ID to wait for a notification from
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
wait(1);
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
f793f1450c486b62a1eaf766d1642443ad5d6719
Notifyall
0
255
1380
1379
2019-04-15T15:45:06Z
Polas
1
1 revision imported
wikitext
text/x-wiki
== Overview ==
This notifyall() function will notify all other processes; each of these target processes can wait on or test for the notification. The call is non-blocking and the notifying process continues as soon as the function is called.
* '''Pass:''' Nothing
* '''Returns:''' Nothing
== Example ==
#include <parallel>
function void main() {
proc 1 {
notifyall();
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
77e3323c114c9cf41cd69acab40366d45a1667d9
Waitany
0
256
1383
1382
2019-04-15T15:45:06Z
Polas
1
2 revisions imported
wikitext
text/x-wiki
== Overview ==
This waitany() function will block and wait for a notification from any process. The ID of that process is returned.
* '''Pass:''' Nothing
* '''Returns:''' The ID of the process that notified this process
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
var p:=waitany();
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
31a56ec116dd43d387be75bc1bdc4e16e98d5c12
Test notification
0
257
1387
1386
2019-04-15T15:45:06Z
Polas
1
3 revisions imported
wikitext
text/x-wiki
== Overview ==
This test_notification(n) function will test for a notification from process ''n''; if such a notification is available then it is received (i.e. one need not then call [[Wait|wait]] or [[Waitany|waitany]]).
* '''Pass:''' an [[Int]] representing the process ID to test for a notification from
* '''Returns:''' a [[Bool]] representing whether a notification was received or not
== Example ==
#include <parallel>
function void main() {
proc 1 {
notify(0);
};
proc 0 {
while (!test_notification(1)) { };
};
};
''Since: Version 1.00''
[[Category:Function Library]]
[[Category:Parallel Functions]]
aba0d271e2af22cbc2b194aa3ca7c02505263bde
File:Types.jpg
6
19
1391
109
2019-04-15T16:03:21Z
Polas
1
Polas uploaded [[File:Types.jpg]]
wikitext
text/x-wiki
Type Chain formed when combining types A::B::C::D::E
f1c13468bdd6fb5b43f265520ee5b5f847894873
File:Total.jpg
6
171
1392
922
2019-04-15T16:03:32Z
Polas
1
Polas uploaded [[File:Total.jpg]]
wikitext
text/x-wiki
NASA's Parallel Benchmark IS Total Million Operations per Second
e52f52f4684a6027386206f785248aa917b0cfa9
File:Overview.jpg
6
161
1394
879
2019-04-15T16:04:01Z
Polas
1
Polas uploaded [[File:Overview.jpg]]
wikitext
text/x-wiki
Overview of Translation Process
194801d32004be3229ac704ed630d88f5ac83f55
File:Horiz.jpg
6
92
1395
518
2019-04-15T16:04:11Z
Polas
1
Polas uploaded [[File:Horiz.jpg]]
wikitext
text/x-wiki
Horizontal partitioning of an array via the horizontal type
574c772bfc90f590db956c081c201e3ab506c94b
File:Classc.jpg
6
169
1396
918
2019-04-15T16:04:22Z
Polas
1
Polas uploaded [[File:Classc.jpg]]
wikitext
text/x-wiki
NASA's Parallel Benchmark IS class C
67f08d79b2a9e83a032fb5034744f2ce3905862e
File:Flexdetail.jpg
6
160
1397
877
2019-04-15T16:04:42Z
Polas
1
Polas uploaded [[File:Flexdetail.jpg]]
wikitext
text/x-wiki
Flexibo translation in detail
ed996494fbc47b463d3de57ba1ef36c89c656483
File:Vert.jpg
6
93
1398
520
2019-04-15T16:04:55Z
Polas
1
Polas uploaded [[File:Vert.jpg]]
wikitext
text/x-wiki
Vertical partitioning of an array via the vertical type
bf828b129f970f21341fb2357d36f32a993c68be
File:Bell.gif
6
150
1399
833
2019-04-15T16:05:06Z
Polas
1
Polas uploaded [[File:Bell.gif]]
wikitext
text/x-wiki
Decreasing performance as the number of processors becomes too great
d2a2265a09e2b9959e9c9e4c9eed8f4bbaf7501e
File:Spec.png
6
213
1400
1156
2019-04-15T16:05:19Z
Polas
1
Polas uploaded [[File:Spec.png]]
wikitext
text/x-wiki
Language specification
a6c03d5a30547b6c09595ea22f0dbebbeef99f62
File:Bell.jpg
6
151
1401
835
2019-04-15T16:05:29Z
Polas
1
Polas uploaded [[File:Bell.jpg]]
wikitext
text/x-wiki
Decreasing performance as the number of processors becomes too great
d2a2265a09e2b9959e9c9e4c9eed8f4bbaf7501e
File:Runtimelibrary.png
6
212
1402
1154
2019-04-15T16:05:40Z
Polas
1
Polas uploaded [[File:Runtimelibrary.png]]
wikitext
text/x-wiki
Runtime library icon
4cdf1b63469639f8e3882a9cb001ce3c1443d3fa
File:Mesham.gif
6
211
1403
1152
2019-04-15T16:05:52Z
Polas
1
Polas uploaded [[File:Mesham.gif]]
wikitext
text/x-wiki
Mesham arjuna logo
18147eae74106487894c9dcbd40dd8088e84cfd0
File:Dartboard.jpg
6
138
1404
756
2019-04-15T16:06:02Z
Polas
1
Polas uploaded [[File:Dartboard.jpg]]
wikitext
text/x-wiki
Dartboard
b560bd391a0504dee677d480d1ea12753fef21e9
File:2gb.jpg
6
166
1405
912
2019-04-15T16:06:19Z
Polas
1
Polas uploaded [[File:2gb.jpg]]
wikitext
text/x-wiki
Fast Fourier Transformation with 2GB of data
729d28baa79fd9f53106a7732768ce410b323819
File:Mandlezoom.jpg
6
168
1406
916
2019-04-15T16:07:29Z
Polas
1
Polas uploaded [[File:Mandlezoom.jpg]]
wikitext
text/x-wiki
Mandelbrot Performance Tests
56594bf810192a48e1ce114b660f32c20a23f5a8
File:Mandle.gif
6
136
1407
745
2019-04-15T16:07:40Z
Polas
1
Polas uploaded [[File:Mandle.gif]]
wikitext
text/x-wiki
Mandelbrot example written in Mesham
96c49786466d38afa546f88100b6dd44fa0e0380
File:Process.jpg
6
172
1408
924
2019-04-15T16:07:49Z
Polas
1
Polas uploaded [[File:Process.jpg]]
wikitext
text/x-wiki
NASA's Parallel Benchmark IS Million Operations per Second per Process
5b31c180dca090e6f04338f0483305428ace98e5
File:Pram.gif
6
147
1409
824
2019-04-15T16:07:59Z
Polas
1
Polas uploaded [[File:Pram.gif]]
wikitext
text/x-wiki
Parallel Random Access Machine
b7936ec07dfd143609eabc6862a0c7fa0f6b8b17
File:Evendist.jpg
6
94
1410
522
2019-04-15T16:08:11Z
Polas
1
Polas uploaded [[File:Evendist.jpg]]
wikitext
text/x-wiki
Even distribution of 10 blocks over 4 processors
1831c950976897aab248fe6058609023f0edb3bd
File:Messagepassing.gif
6
148
1411
826
2019-04-15T16:08:22Z
Polas
1
Polas uploaded [[File:Messagepassing.gif]]
wikitext
text/x-wiki
Message Passing based communication
78f5d58106e6dcbc6620f6143e649e393e3eae10
File:Imagep.jpg
6
141
1412
774
2019-04-15T16:08:45Z
Polas
1
Polas uploaded [[File:Imagep.jpg]]
wikitext
text/x-wiki
Example of high and low pass filters operating on an image
44ca822d7d041388db2e0768c033edc01be7d571
File:128.jpg
6
167
1413
914
2019-04-15T16:09:11Z
Polas
1
Polas uploaded [[File:128.jpg]]
wikitext
text/x-wiki
Fast Fourier Transformation example performed with 128MB data
9673f48589455b2c2e20aa52d4982130e782a79c
File:Classb.jpg
6
170
1414
920
2019-04-15T16:09:22Z
Polas
1
Polas uploaded [[File:Classb.jpg]]
wikitext
text/x-wiki
NASA's Parallel Benchmark IS class B
8d320be9de4ed6ba04c6c52f56a8c0132f826055